00:00:00.000 Started by upstream project "autotest-per-patch" build number 132853
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.048 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.049 The recommended git tool is: git
00:00:00.050 using credential 00000000-0000-0000-0000-000000000002
00:00:00.057 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.075 Fetching changes from the remote Git repository
00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.100 Using shallow fetch with depth 1
00:00:00.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.100 > git --version # timeout=10
00:00:00.134 > git --version # 'git version 2.39.2'
00:00:00.134 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.523 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.538 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.550 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.550 > git config core.sparsecheckout # timeout=10
00:00:03.564 > git read-tree -mu HEAD # timeout=10
00:00:03.580 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.599 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.600 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.705 [Pipeline] Start of Pipeline
00:00:03.718 [Pipeline] library
00:00:03.720 Loading library shm_lib@master
00:00:03.720 Library shm_lib@master is cached. Copying from home.
00:00:03.736 [Pipeline] node
00:00:03.747 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:03.749 [Pipeline] {
00:00:03.759 [Pipeline] catchError
00:00:03.761 [Pipeline] {
00:00:03.773 [Pipeline] wrap
00:00:03.782 [Pipeline] {
00:00:03.790 [Pipeline] stage
00:00:03.792 [Pipeline] { (Prologue)
00:00:03.810 [Pipeline] echo
00:00:03.811 Node: VM-host-WFP7
00:00:03.817 [Pipeline] cleanWs
00:00:03.827 [WS-CLEANUP] Deleting project workspace...
00:00:03.827 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.834 [WS-CLEANUP] done
00:00:04.066 [Pipeline] setCustomBuildProperty
00:00:04.154 [Pipeline] httpRequest
00:00:04.489 [Pipeline] echo
00:00:04.490 Sorcerer 10.211.164.20 is alive
00:00:04.496 [Pipeline] retry
00:00:04.498 [Pipeline] {
00:00:04.506 [Pipeline] httpRequest
00:00:04.516 HttpMethod: GET
00:00:04.516 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.517 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.517 Response Code: HTTP/1.1 200 OK
00:00:04.517 Success: Status code 200 is in the accepted range: 200,404
00:00:04.518 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.824 [Pipeline] }
00:00:04.837 [Pipeline] // retry
00:00:04.843 [Pipeline] sh
00:00:05.125 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.141 [Pipeline] httpRequest
00:00:05.706 [Pipeline] echo
00:00:05.708 Sorcerer 10.211.164.20 is alive
00:00:05.715 [Pipeline] retry
00:00:05.717 [Pipeline] {
00:00:05.732 [Pipeline] httpRequest
00:00:05.736 HttpMethod: GET
00:00:05.737 URL: http://10.211.164.20/packages/spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:05.737 Sending request to url: http://10.211.164.20/packages/spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:05.739 Response Code: HTTP/1.1 200 OK
00:00:05.739 Success: Status code 200 is in the accepted range: 200,404
00:00:05.740 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:28.292 [Pipeline] }
00:00:28.309 [Pipeline] // retry
00:00:28.317 [Pipeline] sh
00:00:28.600 + tar --no-same-owner -xf spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:31.150 [Pipeline] sh
00:00:31.434 + git -C spdk log --oneline -n5
00:00:31.434 b9cf27559 script/rpc.py: Put python library fisrt in library path
00:00:31.434 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:31.434 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:31.434 66289a6db build: use VERSION file for storing version
00:00:31.434 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:31.449 [Pipeline] writeFile
00:00:31.462 [Pipeline] sh
00:00:31.744 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:31.756 [Pipeline] sh
00:00:32.039 + cat autorun-spdk.conf
00:00:32.039 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.039 SPDK_RUN_ASAN=1
00:00:32.039 SPDK_RUN_UBSAN=1
00:00:32.039 SPDK_TEST_RAID=1
00:00:32.039 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:32.048 RUN_NIGHTLY=0
00:00:32.050 [Pipeline] }
00:00:32.063 [Pipeline] // stage
00:00:32.077 [Pipeline] stage
00:00:32.079 [Pipeline] { (Run VM)
00:00:32.091 [Pipeline] sh
00:00:32.376 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:32.376 + echo 'Start stage prepare_nvme.sh'
00:00:32.376 Start stage prepare_nvme.sh
00:00:32.376 + [[ -n 7 ]]
00:00:32.376 + disk_prefix=ex7
00:00:32.376 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:00:32.376 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:00:32.376 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:00:32.376 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.376 ++ SPDK_RUN_ASAN=1
00:00:32.376 ++ SPDK_RUN_UBSAN=1
00:00:32.376 ++ SPDK_TEST_RAID=1
00:00:32.376 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:32.376 ++ RUN_NIGHTLY=0
00:00:32.376 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:00:32.376 + nvme_files=()
00:00:32.376 + declare -A nvme_files
00:00:32.376 + backend_dir=/var/lib/libvirt/images/backends
00:00:32.376 + nvme_files['nvme.img']=5G
00:00:32.376 + nvme_files['nvme-cmb.img']=5G
00:00:32.376 + nvme_files['nvme-multi0.img']=4G
00:00:32.376 + nvme_files['nvme-multi1.img']=4G
00:00:32.376 + nvme_files['nvme-multi2.img']=4G
00:00:32.376 + nvme_files['nvme-openstack.img']=8G
00:00:32.376 + nvme_files['nvme-zns.img']=5G
00:00:32.376 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:32.376 + (( SPDK_TEST_FTL == 1 ))
00:00:32.376 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:32.376 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:32.376 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:32.376 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:32.376 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:32.376 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:32.376 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:32.376 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.376 + for nvme in "${!nvme_files[@]}"
00:00:32.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:32.635 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.635 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:32.635 + echo 'End stage prepare_nvme.sh'
00:00:32.635 End stage prepare_nvme.sh
00:00:32.647 [Pipeline] sh
00:00:32.931 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:32.931 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:00:32.931
00:00:32.931 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:00:32.931 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:00:32.931 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:00:32.931 HELP=0
00:00:32.931 DRY_RUN=0
00:00:32.931 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:00:32.931 NVME_DISKS_TYPE=nvme,nvme,
00:00:32.931 NVME_AUTO_CREATE=0
00:00:32.931 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:00:32.931 NVME_CMB=,,
00:00:32.931 NVME_PMR=,,
00:00:32.931 NVME_ZNS=,,
00:00:32.931 NVME_MS=,,
00:00:32.931 NVME_FDP=,,
00:00:32.931 SPDK_VAGRANT_DISTRO=fedora39
00:00:32.931 SPDK_VAGRANT_VMCPU=10
00:00:32.931 SPDK_VAGRANT_VMRAM=12288
00:00:32.931 SPDK_VAGRANT_PROVIDER=libvirt
00:00:32.931 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:32.931 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:32.931 SPDK_OPENSTACK_NETWORK=0
00:00:32.931 VAGRANT_PACKAGE_BOX=0
00:00:32.931 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:32.931 FORCE_DISTRO=true
00:00:32.931 VAGRANT_BOX_VERSION=
00:00:32.931 EXTRA_VAGRANTFILES=
00:00:32.931 NIC_MODEL=virtio
00:00:32.931
00:00:32.931 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:00:32.931 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:00:35.466 Bringing machine 'default' up with 'libvirt' provider...
00:00:35.466 ==> default: Creating image (snapshot of base box volume).
00:00:35.725 ==> default: Creating domain with the following settings...
00:00:35.725 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733994849_af96464c0fb7f6d786b3
00:00:35.725 ==> default: -- Domain type: kvm
00:00:35.725 ==> default: -- Cpus: 10
00:00:35.725 ==> default: -- Feature: acpi
00:00:35.725 ==> default: -- Feature: apic
00:00:35.725 ==> default: -- Feature: pae
00:00:35.725 ==> default: -- Memory: 12288M
00:00:35.725 ==> default: -- Memory Backing: hugepages:
00:00:35.725 ==> default: -- Management MAC:
00:00:35.725 ==> default: -- Loader:
00:00:35.725 ==> default: -- Nvram:
00:00:35.725 ==> default: -- Base box: spdk/fedora39
00:00:35.725 ==> default: -- Storage pool: default
00:00:35.725 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733994849_af96464c0fb7f6d786b3.img (20G)
00:00:35.725 ==> default: -- Volume Cache: default
00:00:35.725 ==> default: -- Kernel:
00:00:35.725 ==> default: -- Initrd:
00:00:35.725 ==> default: -- Graphics Type: vnc
00:00:35.725 ==> default: -- Graphics Port: -1
00:00:35.725 ==> default: -- Graphics IP: 127.0.0.1
00:00:35.725 ==> default: -- Graphics Password: Not defined
00:00:35.725 ==> default: -- Video Type: cirrus
00:00:35.725 ==> default: -- Video VRAM: 9216
00:00:35.725 ==> default: -- Sound Type:
00:00:35.725 ==> default: -- Keymap: en-us
00:00:35.725 ==> default: -- TPM Path:
00:00:35.725 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:35.725 ==> default: -- Command line args:
00:00:35.725 ==> default: -> value=-device,
00:00:35.725 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:35.725 ==> default: -> value=-drive,
00:00:35.725 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:00:35.725 ==> default: -> value=-device,
00:00:35.725 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.725 ==> default: -> value=-device,
00:00:35.725 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:35.725 ==> default: -> value=-drive,
00:00:35.725 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:35.725 ==> default: -> value=-device,
00:00:35.725 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.725 ==> default: -> value=-drive,
00:00:35.726 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:35.726 ==> default: -> value=-device,
00:00:35.726 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.726 ==> default: -> value=-drive,
00:00:35.726 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:35.726 ==> default: -> value=-device,
00:00:35.726 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.985 ==> default: Creating shared folders metadata...
00:00:35.985 ==> default: Starting domain.
00:00:37.365 ==> default: Waiting for domain to get an IP address...
00:00:55.487 ==> default: Waiting for SSH to become available...
00:00:55.487 ==> default: Configuring and enabling network interfaces...
00:00:59.687 default: SSH address: 192.168.121.38:22
00:00:59.687 default: SSH username: vagrant
00:00:59.687 default: SSH auth method: private key
00:01:03.005 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:11.121 ==> default: Mounting SSHFS shared folder...
00:01:12.513 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:12.513 ==> default: Checking Mount..
00:01:14.417 ==> default: Folder Successfully Mounted!
00:01:14.417 ==> default: Running provisioner: file...
00:01:15.353 default: ~/.gitconfig => .gitconfig
00:01:15.921
00:01:15.921 SUCCESS!
00:01:15.921
00:01:15.921 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:15.921 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:15.921 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:15.921
00:01:15.930 [Pipeline] }
00:01:15.945 [Pipeline] // stage
00:01:15.954 [Pipeline] dir
00:01:15.955 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:01:15.957 [Pipeline] {
00:01:15.969 [Pipeline] catchError
00:01:15.971 [Pipeline] {
00:01:15.984 [Pipeline] sh
00:01:16.266 + vagrant ssh-config --host vagrant
00:01:16.266 + sed -ne /^Host/,$p
00:01:16.266 + tee ssh_conf
00:01:18.805 Host vagrant
00:01:18.805 HostName 192.168.121.38
00:01:18.805 User vagrant
00:01:18.805 Port 22
00:01:18.805 UserKnownHostsFile /dev/null
00:01:18.805 StrictHostKeyChecking no
00:01:18.805 PasswordAuthentication no
00:01:18.805 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:18.805 IdentitiesOnly yes
00:01:18.805 LogLevel FATAL
00:01:18.805 ForwardAgent yes
00:01:18.805 ForwardX11 yes
00:01:18.805
00:01:18.819 [Pipeline] withEnv
00:01:18.822 [Pipeline] {
00:01:18.835 [Pipeline] sh
00:01:19.117 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:19.117 source /etc/os-release
00:01:19.117 [[ -e /image.version ]] && img=$(< /image.version)
00:01:19.117 # Minimal, systemd-like check.
00:01:19.117 if [[ -e /.dockerenv ]]; then
00:01:19.117 # Clear garbage from the node's name:
00:01:19.117 # agt-er_autotest_547-896 -> autotest_547-896
00:01:19.117 # $HOSTNAME is the actual container id
00:01:19.117 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:19.117 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:19.117 # We can assume this is a mount from a host where container is running,
00:01:19.117 # so fetch its hostname to easily identify the target swarm worker.
00:01:19.117 container="$(< /etc/hostname) ($agent)"
00:01:19.117 else
00:01:19.117 # Fallback
00:01:19.117 container=$agent
00:01:19.117 fi
00:01:19.117 fi
00:01:19.117 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:19.117
00:01:19.387 [Pipeline] }
00:01:19.402 [Pipeline] // withEnv
00:01:19.409 [Pipeline] setCustomBuildProperty
00:01:19.422 [Pipeline] stage
00:01:19.423 [Pipeline] { (Tests)
00:01:19.438 [Pipeline] sh
00:01:19.719 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:20.025 [Pipeline] sh
00:01:20.305 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:20.577 [Pipeline] timeout
00:01:20.578 Timeout set to expire in 1 hr 30 min
00:01:20.579 [Pipeline] {
00:01:20.593 [Pipeline] sh
00:01:20.874 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:21.442 HEAD is now at b9cf27559 script/rpc.py: Put python library fisrt in library path
00:01:21.453 [Pipeline] sh
00:01:21.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:22.005 [Pipeline] sh
00:01:22.287 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:22.585 [Pipeline] sh
00:01:22.863 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:23.122 ++ readlink -f spdk_repo
00:01:23.122 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:23.122 + [[ -n /home/vagrant/spdk_repo ]]
00:01:23.122 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:23.122 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:23.122 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:23.122 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:23.122 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:23.122 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:23.122 + cd /home/vagrant/spdk_repo
00:01:23.122 + source /etc/os-release
00:01:23.122 ++ NAME='Fedora Linux'
00:01:23.122 ++ VERSION='39 (Cloud Edition)'
00:01:23.122 ++ ID=fedora
00:01:23.122 ++ VERSION_ID=39
00:01:23.122 ++ VERSION_CODENAME=
00:01:23.122 ++ PLATFORM_ID=platform:f39
00:01:23.122 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:23.122 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:23.122 ++ LOGO=fedora-logo-icon
00:01:23.122 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:23.122 ++ HOME_URL=https://fedoraproject.org/
00:01:23.122 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:23.122 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:23.122 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:23.122 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:23.122 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:23.122 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:23.122 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:23.122 ++ SUPPORT_END=2024-11-12
00:01:23.122 ++ VARIANT='Cloud Edition'
00:01:23.122 ++ VARIANT_ID=cloud
00:01:23.122 + uname -a
00:01:23.122 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:23.122 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:23.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:23.690 Hugepages
00:01:23.690 node hugesize free / total
00:01:23.690 node0 1048576kB 0 / 0
00:01:23.690 node0 2048kB 0 / 0
00:01:23.690
00:01:23.690 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:23.690 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:23.690 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:23.690 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:23.690 + rm -f /tmp/spdk-ld-path
00:01:23.690 + source autorun-spdk.conf
00:01:23.690 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.690 ++ SPDK_RUN_ASAN=1
00:01:23.690 ++ SPDK_RUN_UBSAN=1
00:01:23.690 ++ SPDK_TEST_RAID=1
00:01:23.690 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:23.690 ++ RUN_NIGHTLY=0
00:01:23.690 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.690 + [[ -n '' ]]
00:01:23.690 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:23.690 + for M in /var/spdk/build-*-manifest.txt
00:01:23.690 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:23.690 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.690 + for M in /var/spdk/build-*-manifest.txt
00:01:23.690 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.690 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.690 + for M in /var/spdk/build-*-manifest.txt
00:01:23.690 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.690 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.950 ++ uname
00:01:23.950 + [[ Linux == \L\i\n\u\x ]]
00:01:23.950 + sudo dmesg -T
00:01:23.950 + sudo dmesg --clear
00:01:23.950 + dmesg_pid=5430
00:01:23.950 + [[ Fedora Linux == FreeBSD ]]
00:01:23.950 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.950 + sudo dmesg -Tw
00:01:23.950 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.950 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.950 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.950 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.950 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.950 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.950 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:23.950 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:23.950 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.950 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.950 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:23.950 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.950 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.950 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.950 09:14:57 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.950 09:14:57 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.950 09:14:57 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.950 09:14:57 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:23.950 09:14:57 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:23.950 09:14:57 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:23.950 09:14:57 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:23.950 09:14:57 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:23.950 09:14:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:23.950 09:14:57 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:24.208 09:14:57 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:24.208 09:14:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:24.208 09:14:57 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:24.208 09:14:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:24.208 09:14:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:24.208 09:14:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:24.208 09:14:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.208 09:14:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.208 09:14:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.208 09:14:57 -- paths/export.sh@5 -- $ export PATH
00:01:24.208 09:14:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.208 09:14:57 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:24.208 09:14:58 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:24.208 09:14:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733994898.XXXXXX
00:01:24.208 09:14:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733994898.5t9PwS
00:01:24.208 09:14:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:24.208 09:14:58 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:24.208 09:14:58 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:24.208 09:14:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:24.208 09:14:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:24.208 09:14:58 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:24.208 09:14:58 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:24.208 09:14:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.208 09:14:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:24.208 09:14:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:24.208 09:14:58 -- pm/common@17 -- $ local monitor
00:01:24.208 09:14:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.208 09:14:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.208 09:14:58 -- pm/common@25 -- $ sleep 1
00:01:24.208 09:14:58 -- pm/common@21 -- $ date +%s
00:01:24.208 09:14:58 -- pm/common@21 -- $ date +%s
00:01:24.208 09:14:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733994898
00:01:24.208 09:14:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733994898
00:01:24.208 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733994898_collect-cpu-load.pm.log
00:01:24.208 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733994898_collect-vmstat.pm.log
00:01:25.146 09:14:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:25.146 09:14:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:25.146 09:14:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:25.146 09:14:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:25.146 09:14:59 -- spdk/autobuild.sh@16 -- $ date -u
00:01:25.146 Thu Dec 12 09:14:59 AM UTC 2024
00:01:25.146 09:14:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:25.146 v25.01-rc1-2-gb9cf27559
00:01:25.146 09:14:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:25.146 09:14:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:25.146 09:14:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:25.146 09:14:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:25.146 09:14:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.146 ************************************
00:01:25.146 START TEST asan
00:01:25.146 ************************************
00:01:25.146 using asan
00:01:25.146 09:14:59 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:25.146
00:01:25.146 real 0m0.001s
00:01:25.146 user 0m0.000s
00:01:25.146 sys 0m0.001s
00:01:25.146 09:14:59 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:25.146 09:14:59 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:25.146 ************************************
00:01:25.146 END TEST asan
00:01:25.146 ************************************
00:01:25.146 09:14:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:25.146 09:14:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:25.146 09:14:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:25.146 09:14:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:25.146 09:14:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.146 ************************************
00:01:25.146 START TEST ubsan
00:01:25.146 ************************************
00:01:25.146 using ubsan
00:01:25.146 09:14:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:25.146
00:01:25.146 real 0m0.001s
00:01:25.146 user 0m0.000s
00:01:25.146 sys 0m0.000s
00:01:25.146 09:14:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:25.146 09:14:59 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:25.146 ************************************
00:01:25.146 END TEST ubsan
00:01:25.146 ************************************
00:01:25.405 09:14:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:25.405 09:14:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:25.405 09:14:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:25.405 09:14:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:25.405 09:14:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:25.405 09:14:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:25.405 09:14:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:25.405 09:14:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:25.405 09:14:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:25.405 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:25.405 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:25.972 Using 'verbs' RDMA provider
00:01:41.802 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:56.707 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:57.273 Creating mk/config.mk...done.
00:01:57.273 Creating mk/cc.flags.mk...done.
00:01:57.273 Type 'make' to build.
00:01:57.273 09:15:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:57.273 09:15:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:57.273 09:15:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:57.273 09:15:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.274 ************************************
00:01:57.274 START TEST make
00:01:57.274 ************************************
00:01:57.274 09:15:31 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:09.475 The Meson build system
00:02:09.475 Version: 1.5.0
00:02:09.475 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:09.475 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:09.475 Build type: native build
00:02:09.475 Program cat found: YES (/usr/bin/cat)
00:02:09.475 Project name: DPDK
00:02:09.475 Project version: 24.03.0
00:02:09.475 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:09.475 C linker for the host machine: cc ld.bfd 2.40-14
00:02:09.475 Host machine cpu family: x86_64
00:02:09.475 Host machine cpu: x86_64
00:02:09.475 Message: ## Building in Developer Mode ##
00:02:09.475 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:09.475 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:09.475 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:09.475 Program python3 found: YES (/usr/bin/python3)
00:02:09.475 Program cat found: YES (/usr/bin/cat)
00:02:09.475 Compiler for C supports arguments -march=native: YES
00:02:09.475 Checking for size of "void *" : 8
00:02:09.475 Checking for size of "void *" : 8 (cached)
00:02:09.475 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:09.475 Library m found: YES
00:02:09.475 Library numa found: YES
00:02:09.475 Has header "numaif.h" : YES
00:02:09.475 Library fdt found: NO
00:02:09.475 Library execinfo found: NO
00:02:09.475 Has header "execinfo.h" : YES
00:02:09.475 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:09.475 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:09.475 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:09.475 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:09.475 Run-time dependency openssl found: YES 3.1.1
00:02:09.475 Run-time dependency libpcap found: YES 1.10.4
00:02:09.475 Has header "pcap.h" with dependency libpcap: YES
00:02:09.475 Compiler for C supports arguments -Wcast-qual: YES
00:02:09.475 Compiler for C supports arguments -Wdeprecated: YES
00:02:09.475 Compiler for C supports arguments -Wformat: YES
00:02:09.475 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:09.475 Compiler for C supports arguments -Wformat-security: NO
00:02:09.475 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:09.475 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:09.475 Compiler for C supports arguments -Wnested-externs: YES
00:02:09.475 Compiler for C supports arguments -Wold-style-definition: YES
00:02:09.475 Compiler for C supports arguments -Wpointer-arith: YES
00:02:09.475 Compiler for C supports arguments -Wsign-compare: YES
00:02:09.475 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:09.475 Compiler for C supports arguments -Wundef: YES
00:02:09.475 Compiler for C supports arguments -Wwrite-strings: YES
00:02:09.475 Compiler for C supports
arguments -Wno-address-of-packed-member: YES 00:02:09.475 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.475 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.475 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.475 Program objdump found: YES (/usr/bin/objdump) 00:02:09.475 Compiler for C supports arguments -mavx512f: YES 00:02:09.475 Checking if "AVX512 checking" compiles: YES 00:02:09.475 Fetching value of define "__SSE4_2__" : 1 00:02:09.475 Fetching value of define "__AES__" : 1 00:02:09.475 Fetching value of define "__AVX__" : 1 00:02:09.475 Fetching value of define "__AVX2__" : 1 00:02:09.475 Fetching value of define "__AVX512BW__" : 1 00:02:09.475 Fetching value of define "__AVX512CD__" : 1 00:02:09.475 Fetching value of define "__AVX512DQ__" : 1 00:02:09.475 Fetching value of define "__AVX512F__" : 1 00:02:09.475 Fetching value of define "__AVX512VL__" : 1 00:02:09.475 Fetching value of define "__PCLMUL__" : 1 00:02:09.475 Fetching value of define "__RDRND__" : 1 00:02:09.475 Fetching value of define "__RDSEED__" : 1 00:02:09.475 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.475 Fetching value of define "__znver1__" : (undefined) 00:02:09.475 Fetching value of define "__znver2__" : (undefined) 00:02:09.475 Fetching value of define "__znver3__" : (undefined) 00:02:09.475 Fetching value of define "__znver4__" : (undefined) 00:02:09.475 Library asan found: YES 00:02:09.475 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.475 Message: lib/log: Defining dependency "log" 00:02:09.475 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.475 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.475 Library rt found: YES 00:02:09.475 Checking for function "getentropy" : NO 00:02:09.475 Message: lib/eal: Defining dependency "eal" 00:02:09.475 Message: lib/ring: Defining dependency "ring" 00:02:09.475 Message: lib/rcu: Defining 
dependency "rcu" 00:02:09.475 Message: lib/mempool: Defining dependency "mempool" 00:02:09.475 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.475 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.475 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.475 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.475 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:09.475 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:09.475 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:09.475 Compiler for C supports arguments -mpclmul: YES 00:02:09.475 Compiler for C supports arguments -maes: YES 00:02:09.475 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.475 Compiler for C supports arguments -mavx512bw: YES 00:02:09.475 Compiler for C supports arguments -mavx512dq: YES 00:02:09.475 Compiler for C supports arguments -mavx512vl: YES 00:02:09.475 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.475 Compiler for C supports arguments -mavx2: YES 00:02:09.475 Compiler for C supports arguments -mavx: YES 00:02:09.475 Message: lib/net: Defining dependency "net" 00:02:09.475 Message: lib/meter: Defining dependency "meter" 00:02:09.475 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.475 Message: lib/pci: Defining dependency "pci" 00:02:09.475 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.475 Message: lib/hash: Defining dependency "hash" 00:02:09.475 Message: lib/timer: Defining dependency "timer" 00:02:09.475 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.475 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.475 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.475 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.475 Message: lib/power: Defining dependency "power" 00:02:09.475 Message: lib/reorder: Defining dependency "reorder" 00:02:09.475 Message: lib/security: Defining dependency "security" 
00:02:09.475 Has header "linux/userfaultfd.h" : YES 00:02:09.475 Has header "linux/vduse.h" : YES 00:02:09.475 Message: lib/vhost: Defining dependency "vhost" 00:02:09.475 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.475 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.476 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.476 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.476 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.476 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.476 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.476 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.476 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.476 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.476 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.476 Configuring doxy-api-html.conf using configuration 00:02:09.476 Configuring doxy-api-man.conf using configuration 00:02:09.476 Program mandb found: YES (/usr/bin/mandb) 00:02:09.476 Program sphinx-build found: NO 00:02:09.476 Configuring rte_build_config.h using configuration 00:02:09.476 Message: 00:02:09.476 ================= 00:02:09.476 Applications Enabled 00:02:09.476 ================= 00:02:09.476 00:02:09.476 apps: 00:02:09.476 00:02:09.476 00:02:09.476 Message: 00:02:09.476 ================= 00:02:09.476 Libraries Enabled 00:02:09.476 ================= 00:02:09.476 00:02:09.476 libs: 00:02:09.476 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.476 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.476 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.476 00:02:09.476 Message: 00:02:09.476 =============== 00:02:09.476 Drivers Enabled 00:02:09.476 =============== 00:02:09.476 
00:02:09.476 common: 00:02:09.476 00:02:09.476 bus: 00:02:09.476 pci, vdev, 00:02:09.476 mempool: 00:02:09.476 ring, 00:02:09.476 dma: 00:02:09.476 00:02:09.476 net: 00:02:09.476 00:02:09.476 crypto: 00:02:09.476 00:02:09.476 compress: 00:02:09.476 00:02:09.476 vdpa: 00:02:09.476 00:02:09.476 00:02:09.476 Message: 00:02:09.476 ================= 00:02:09.476 Content Skipped 00:02:09.476 ================= 00:02:09.476 00:02:09.476 apps: 00:02:09.476 dumpcap: explicitly disabled via build config 00:02:09.476 graph: explicitly disabled via build config 00:02:09.476 pdump: explicitly disabled via build config 00:02:09.476 proc-info: explicitly disabled via build config 00:02:09.476 test-acl: explicitly disabled via build config 00:02:09.476 test-bbdev: explicitly disabled via build config 00:02:09.476 test-cmdline: explicitly disabled via build config 00:02:09.476 test-compress-perf: explicitly disabled via build config 00:02:09.476 test-crypto-perf: explicitly disabled via build config 00:02:09.476 test-dma-perf: explicitly disabled via build config 00:02:09.476 test-eventdev: explicitly disabled via build config 00:02:09.476 test-fib: explicitly disabled via build config 00:02:09.476 test-flow-perf: explicitly disabled via build config 00:02:09.476 test-gpudev: explicitly disabled via build config 00:02:09.476 test-mldev: explicitly disabled via build config 00:02:09.476 test-pipeline: explicitly disabled via build config 00:02:09.476 test-pmd: explicitly disabled via build config 00:02:09.476 test-regex: explicitly disabled via build config 00:02:09.476 test-sad: explicitly disabled via build config 00:02:09.476 test-security-perf: explicitly disabled via build config 00:02:09.476 00:02:09.476 libs: 00:02:09.476 argparse: explicitly disabled via build config 00:02:09.476 metrics: explicitly disabled via build config 00:02:09.476 acl: explicitly disabled via build config 00:02:09.476 bbdev: explicitly disabled via build config 00:02:09.476 bitratestats: explicitly 
disabled via build config 00:02:09.476 bpf: explicitly disabled via build config 00:02:09.476 cfgfile: explicitly disabled via build config 00:02:09.476 distributor: explicitly disabled via build config 00:02:09.476 efd: explicitly disabled via build config 00:02:09.476 eventdev: explicitly disabled via build config 00:02:09.476 dispatcher: explicitly disabled via build config 00:02:09.476 gpudev: explicitly disabled via build config 00:02:09.476 gro: explicitly disabled via build config 00:02:09.476 gso: explicitly disabled via build config 00:02:09.476 ip_frag: explicitly disabled via build config 00:02:09.476 jobstats: explicitly disabled via build config 00:02:09.476 latencystats: explicitly disabled via build config 00:02:09.476 lpm: explicitly disabled via build config 00:02:09.476 member: explicitly disabled via build config 00:02:09.476 pcapng: explicitly disabled via build config 00:02:09.476 rawdev: explicitly disabled via build config 00:02:09.476 regexdev: explicitly disabled via build config 00:02:09.476 mldev: explicitly disabled via build config 00:02:09.476 rib: explicitly disabled via build config 00:02:09.476 sched: explicitly disabled via build config 00:02:09.476 stack: explicitly disabled via build config 00:02:09.476 ipsec: explicitly disabled via build config 00:02:09.476 pdcp: explicitly disabled via build config 00:02:09.476 fib: explicitly disabled via build config 00:02:09.476 port: explicitly disabled via build config 00:02:09.476 pdump: explicitly disabled via build config 00:02:09.476 table: explicitly disabled via build config 00:02:09.476 pipeline: explicitly disabled via build config 00:02:09.476 graph: explicitly disabled via build config 00:02:09.476 node: explicitly disabled via build config 00:02:09.476 00:02:09.476 drivers: 00:02:09.476 common/cpt: not in enabled drivers build config 00:02:09.476 common/dpaax: not in enabled drivers build config 00:02:09.476 common/iavf: not in enabled drivers build config 00:02:09.476 
common/idpf: not in enabled drivers build config 00:02:09.476 common/ionic: not in enabled drivers build config 00:02:09.476 common/mvep: not in enabled drivers build config 00:02:09.476 common/octeontx: not in enabled drivers build config 00:02:09.476 bus/auxiliary: not in enabled drivers build config 00:02:09.476 bus/cdx: not in enabled drivers build config 00:02:09.476 bus/dpaa: not in enabled drivers build config 00:02:09.476 bus/fslmc: not in enabled drivers build config 00:02:09.476 bus/ifpga: not in enabled drivers build config 00:02:09.476 bus/platform: not in enabled drivers build config 00:02:09.476 bus/uacce: not in enabled drivers build config 00:02:09.476 bus/vmbus: not in enabled drivers build config 00:02:09.476 common/cnxk: not in enabled drivers build config 00:02:09.476 common/mlx5: not in enabled drivers build config 00:02:09.476 common/nfp: not in enabled drivers build config 00:02:09.476 common/nitrox: not in enabled drivers build config 00:02:09.476 common/qat: not in enabled drivers build config 00:02:09.476 common/sfc_efx: not in enabled drivers build config 00:02:09.476 mempool/bucket: not in enabled drivers build config 00:02:09.476 mempool/cnxk: not in enabled drivers build config 00:02:09.476 mempool/dpaa: not in enabled drivers build config 00:02:09.476 mempool/dpaa2: not in enabled drivers build config 00:02:09.476 mempool/octeontx: not in enabled drivers build config 00:02:09.476 mempool/stack: not in enabled drivers build config 00:02:09.476 dma/cnxk: not in enabled drivers build config 00:02:09.476 dma/dpaa: not in enabled drivers build config 00:02:09.476 dma/dpaa2: not in enabled drivers build config 00:02:09.476 dma/hisilicon: not in enabled drivers build config 00:02:09.476 dma/idxd: not in enabled drivers build config 00:02:09.476 dma/ioat: not in enabled drivers build config 00:02:09.476 dma/skeleton: not in enabled drivers build config 00:02:09.476 net/af_packet: not in enabled drivers build config 00:02:09.476 net/af_xdp: 
not in enabled drivers build config 00:02:09.476 net/ark: not in enabled drivers build config 00:02:09.476 net/atlantic: not in enabled drivers build config 00:02:09.476 net/avp: not in enabled drivers build config 00:02:09.476 net/axgbe: not in enabled drivers build config 00:02:09.476 net/bnx2x: not in enabled drivers build config 00:02:09.476 net/bnxt: not in enabled drivers build config 00:02:09.476 net/bonding: not in enabled drivers build config 00:02:09.476 net/cnxk: not in enabled drivers build config 00:02:09.476 net/cpfl: not in enabled drivers build config 00:02:09.476 net/cxgbe: not in enabled drivers build config 00:02:09.476 net/dpaa: not in enabled drivers build config 00:02:09.476 net/dpaa2: not in enabled drivers build config 00:02:09.476 net/e1000: not in enabled drivers build config 00:02:09.476 net/ena: not in enabled drivers build config 00:02:09.476 net/enetc: not in enabled drivers build config 00:02:09.476 net/enetfec: not in enabled drivers build config 00:02:09.476 net/enic: not in enabled drivers build config 00:02:09.476 net/failsafe: not in enabled drivers build config 00:02:09.476 net/fm10k: not in enabled drivers build config 00:02:09.476 net/gve: not in enabled drivers build config 00:02:09.476 net/hinic: not in enabled drivers build config 00:02:09.476 net/hns3: not in enabled drivers build config 00:02:09.476 net/i40e: not in enabled drivers build config 00:02:09.476 net/iavf: not in enabled drivers build config 00:02:09.476 net/ice: not in enabled drivers build config 00:02:09.476 net/idpf: not in enabled drivers build config 00:02:09.476 net/igc: not in enabled drivers build config 00:02:09.476 net/ionic: not in enabled drivers build config 00:02:09.476 net/ipn3ke: not in enabled drivers build config 00:02:09.476 net/ixgbe: not in enabled drivers build config 00:02:09.476 net/mana: not in enabled drivers build config 00:02:09.476 net/memif: not in enabled drivers build config 00:02:09.476 net/mlx4: not in enabled drivers build 
config 00:02:09.476 net/mlx5: not in enabled drivers build config 00:02:09.476 net/mvneta: not in enabled drivers build config 00:02:09.476 net/mvpp2: not in enabled drivers build config 00:02:09.476 net/netvsc: not in enabled drivers build config 00:02:09.476 net/nfb: not in enabled drivers build config 00:02:09.476 net/nfp: not in enabled drivers build config 00:02:09.476 net/ngbe: not in enabled drivers build config 00:02:09.476 net/null: not in enabled drivers build config 00:02:09.476 net/octeontx: not in enabled drivers build config 00:02:09.476 net/octeon_ep: not in enabled drivers build config 00:02:09.476 net/pcap: not in enabled drivers build config 00:02:09.476 net/pfe: not in enabled drivers build config 00:02:09.476 net/qede: not in enabled drivers build config 00:02:09.476 net/ring: not in enabled drivers build config 00:02:09.476 net/sfc: not in enabled drivers build config 00:02:09.476 net/softnic: not in enabled drivers build config 00:02:09.476 net/tap: not in enabled drivers build config 00:02:09.476 net/thunderx: not in enabled drivers build config 00:02:09.476 net/txgbe: not in enabled drivers build config 00:02:09.476 net/vdev_netvsc: not in enabled drivers build config 00:02:09.476 net/vhost: not in enabled drivers build config 00:02:09.476 net/virtio: not in enabled drivers build config 00:02:09.476 net/vmxnet3: not in enabled drivers build config 00:02:09.476 raw/*: missing internal dependency, "rawdev" 00:02:09.477 crypto/armv8: not in enabled drivers build config 00:02:09.477 crypto/bcmfs: not in enabled drivers build config 00:02:09.477 crypto/caam_jr: not in enabled drivers build config 00:02:09.477 crypto/ccp: not in enabled drivers build config 00:02:09.477 crypto/cnxk: not in enabled drivers build config 00:02:09.477 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.477 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.477 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.477 crypto/mlx5: not in 
enabled drivers build config 00:02:09.477 crypto/mvsam: not in enabled drivers build config 00:02:09.477 crypto/nitrox: not in enabled drivers build config 00:02:09.477 crypto/null: not in enabled drivers build config 00:02:09.477 crypto/octeontx: not in enabled drivers build config 00:02:09.477 crypto/openssl: not in enabled drivers build config 00:02:09.477 crypto/scheduler: not in enabled drivers build config 00:02:09.477 crypto/uadk: not in enabled drivers build config 00:02:09.477 crypto/virtio: not in enabled drivers build config 00:02:09.477 compress/isal: not in enabled drivers build config 00:02:09.477 compress/mlx5: not in enabled drivers build config 00:02:09.477 compress/nitrox: not in enabled drivers build config 00:02:09.477 compress/octeontx: not in enabled drivers build config 00:02:09.477 compress/zlib: not in enabled drivers build config 00:02:09.477 regex/*: missing internal dependency, "regexdev" 00:02:09.477 ml/*: missing internal dependency, "mldev" 00:02:09.477 vdpa/ifc: not in enabled drivers build config 00:02:09.477 vdpa/mlx5: not in enabled drivers build config 00:02:09.477 vdpa/nfp: not in enabled drivers build config 00:02:09.477 vdpa/sfc: not in enabled drivers build config 00:02:09.477 event/*: missing internal dependency, "eventdev" 00:02:09.477 baseband/*: missing internal dependency, "bbdev" 00:02:09.477 gpu/*: missing internal dependency, "gpudev" 00:02:09.477 00:02:09.477 00:02:09.477 Build targets in project: 85 00:02:09.477 00:02:09.477 DPDK 24.03.0 00:02:09.477 00:02:09.477 User defined options 00:02:09.477 buildtype : debug 00:02:09.477 default_library : shared 00:02:09.477 libdir : lib 00:02:09.477 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.477 b_sanitize : address 00:02:09.477 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:09.477 c_link_args : 00:02:09.477 cpu_instruction_set: native 00:02:09.477 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:09.477 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:09.477 enable_docs : false 00:02:09.477 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:09.477 enable_kmods : false 00:02:09.477 max_lcores : 128 00:02:09.477 tests : false 00:02:09.477 00:02:09.477 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.477 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:09.477 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.477 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.477 [3/268] Linking static target lib/librte_kvargs.a 00:02:09.477 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.477 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.477 [6/268] Linking static target lib/librte_log.a 00:02:09.477 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.477 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.477 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.477 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.477 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.477 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.477 
[13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.477 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.477 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.477 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.477 [17/268] Linking static target lib/librte_telemetry.a 00:02:09.477 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.735 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.735 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.735 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.994 [22/268] Linking target lib/librte_log.so.24.1 00:02:09.994 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.994 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.994 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.994 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.994 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.994 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.994 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.994 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.253 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.253 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.253 [33/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.253 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.520 [35/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.520 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.520 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.520 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.520 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.520 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.520 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.520 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.520 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.520 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.520 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.795 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.795 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.055 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.055 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.055 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.055 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.313 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.313 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.313 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.313 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.313 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 
00:02:11.571 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.571 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.571 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.571 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.831 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.831 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.831 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.831 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.831 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.831 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.090 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.091 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.091 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.350 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.350 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.350 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.350 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.350 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.609 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.609 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.609 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.609 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.609 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.869 [80/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.869 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.869 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.869 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.128 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.128 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.128 [86/268] Linking static target lib/librte_ring.a 00:02:13.128 [87/268] Linking static target lib/librte_eal.a 00:02:13.128 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.388 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.388 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.388 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.388 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.388 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.388 [94/268] Linking static target lib/librte_mempool.a 00:02:13.388 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.648 [96/268] Linking static target lib/librte_rcu.a 00:02:13.648 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.648 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.907 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.907 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.907 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.907 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.907 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.907 [104/268] 
Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.167 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.167 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.167 [107/268] Linking static target lib/librte_net.a 00:02:14.167 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.167 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.167 [110/268] Linking static target lib/librte_mbuf.a 00:02:14.167 [111/268] Linking static target lib/librte_meter.a 00:02:14.429 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.430 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.430 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.430 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.430 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.693 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.693 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.952 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.952 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.212 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.212 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.212 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.471 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.471 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.471 [126/268] Linking static target lib/librte_pci.a 
00:02:15.471 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.471 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.809 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.809 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.809 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.809 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.809 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.809 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.809 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.809 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.809 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.809 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.068 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.068 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.068 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.068 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.068 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.068 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.068 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.068 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.068 [147/268] Linking static target lib/librte_cmdline.a 00:02:16.327 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.587 
[149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.587 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.587 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.587 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.587 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.587 [154/268] Linking static target lib/librte_timer.a 00:02:16.848 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.109 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.109 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.109 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.109 [159/268] Linking static target lib/librte_ethdev.a 00:02:17.109 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.369 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.369 [162/268] Linking static target lib/librte_compressdev.a 00:02:17.369 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.369 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.369 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.369 [166/268] Linking static target lib/librte_hash.a 00:02:17.630 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.630 [168/268] Linking static target lib/librte_dmadev.a 00:02:17.630 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:17.630 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.630 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.890 [172/268] 
Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.890 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.149 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.149 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.149 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.149 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.150 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.409 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.409 [180/268] Linking static target lib/librte_cryptodev.a 00:02:18.409 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.409 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.409 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.669 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.669 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.669 [186/268] Linking static target lib/librte_power.a 00:02:18.928 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.928 [188/268] Linking static target lib/librte_reorder.a 00:02:18.928 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.928 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.928 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.928 [192/268] Linking static target lib/librte_security.a 00:02:18.928 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:19.497 [194/268] Generating lib/reorder.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:19.497 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.756 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.756 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.756 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.015 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.015 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.015 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.275 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.275 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.275 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.535 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.535 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.535 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.535 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.535 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.795 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.795 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.795 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.795 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.795 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.795 [215/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.054 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.054 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.054 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.054 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:21.054 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:21.054 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:21.312 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.312 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:21.312 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.312 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.312 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:21.571 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.509 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.899 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.900 [230/268] Linking target lib/librte_eal.so.24.1 00:02:24.160 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.160 [232/268] Linking target lib/librte_ring.so.24.1 00:02:24.160 [233/268] Linking target lib/librte_meter.so.24.1 00:02:24.160 [234/268] Linking target lib/librte_timer.so.24.1 00:02:24.160 [235/268] Linking target lib/librte_pci.so.24.1 00:02:24.160 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.160 [237/268] Linking target 
lib/librte_dmadev.so.24.1 00:02:24.160 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.160 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.419 [240/268] Linking target lib/librte_rcu.so.24.1 00:02:24.419 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.420 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.420 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:24.420 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.420 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.420 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.420 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.420 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.420 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.680 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.680 [251/268] Linking target lib/librte_net.so.24.1 00:02:24.680 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:24.680 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.680 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.950 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:24.950 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:24.950 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:24.950 [258/268] Linking target lib/librte_hash.so.24.1 00:02:24.950 [259/268] Linking target lib/librte_security.so.24.1 00:02:24.950 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.901 [261/268] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:25.901 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.161 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.161 [264/268] Linking target lib/librte_power.so.24.1 00:02:26.421 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.421 [266/268] Linking static target lib/librte_vhost.a 00:02:28.962 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.962 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:28.962 INFO: autodetecting backend as ninja 00:02:28.962 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:47.058 CC lib/ut_mock/mock.o 00:02:47.058 CC lib/ut/ut.o 00:02:47.058 CC lib/log/log.o 00:02:47.058 CC lib/log/log_flags.o 00:02:47.058 CC lib/log/log_deprecated.o 00:02:47.058 LIB libspdk_ut_mock.a 00:02:47.058 LIB libspdk_ut.a 00:02:47.058 SO libspdk_ut_mock.so.6.0 00:02:47.058 LIB libspdk_log.a 00:02:47.058 SO libspdk_ut.so.2.0 00:02:47.318 SYMLINK libspdk_ut_mock.so 00:02:47.318 SO libspdk_log.so.7.1 00:02:47.318 SYMLINK libspdk_ut.so 00:02:47.318 SYMLINK libspdk_log.so 00:02:47.578 CXX lib/trace_parser/trace.o 00:02:47.578 CC lib/util/base64.o 00:02:47.578 CC lib/util/bit_array.o 00:02:47.578 CC lib/dma/dma.o 00:02:47.578 CC lib/util/crc16.o 00:02:47.578 CC lib/util/cpuset.o 00:02:47.578 CC lib/util/crc32.o 00:02:47.578 CC lib/util/crc32c.o 00:02:47.578 CC lib/ioat/ioat.o 00:02:47.578 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.838 CC lib/vfio_user/host/vfio_user.o 00:02:47.838 CC lib/util/crc32_ieee.o 00:02:47.838 CC lib/util/crc64.o 00:02:47.838 CC lib/util/dif.o 00:02:47.838 CC lib/util/fd.o 00:02:47.838 LIB libspdk_dma.a 00:02:47.838 CC lib/util/fd_group.o 00:02:47.838 SO libspdk_dma.so.5.0 00:02:47.838 CC lib/util/file.o 00:02:47.838 CC lib/util/hexlify.o 00:02:47.838 SYMLINK 
libspdk_dma.so 00:02:47.838 CC lib/util/iov.o 00:02:47.838 LIB libspdk_ioat.a 00:02:47.838 CC lib/util/math.o 00:02:47.838 SO libspdk_ioat.so.7.0 00:02:47.838 CC lib/util/net.o 00:02:48.097 LIB libspdk_vfio_user.a 00:02:48.097 SYMLINK libspdk_ioat.so 00:02:48.097 CC lib/util/pipe.o 00:02:48.097 CC lib/util/strerror_tls.o 00:02:48.097 SO libspdk_vfio_user.so.5.0 00:02:48.097 CC lib/util/string.o 00:02:48.097 SYMLINK libspdk_vfio_user.so 00:02:48.097 CC lib/util/uuid.o 00:02:48.097 CC lib/util/xor.o 00:02:48.097 CC lib/util/zipf.o 00:02:48.097 CC lib/util/md5.o 00:02:48.356 LIB libspdk_util.a 00:02:48.616 LIB libspdk_trace_parser.a 00:02:48.616 SO libspdk_util.so.10.1 00:02:48.616 SO libspdk_trace_parser.so.6.0 00:02:48.616 SYMLINK libspdk_trace_parser.so 00:02:48.616 SYMLINK libspdk_util.so 00:02:48.875 CC lib/rdma_utils/rdma_utils.o 00:02:48.875 CC lib/json/json_parse.o 00:02:48.875 CC lib/json/json_write.o 00:02:48.875 CC lib/json/json_util.o 00:02:48.875 CC lib/conf/conf.o 00:02:48.875 CC lib/vmd/vmd.o 00:02:48.875 CC lib/vmd/led.o 00:02:48.875 CC lib/idxd/idxd.o 00:02:48.875 CC lib/idxd/idxd_user.o 00:02:48.875 CC lib/env_dpdk/env.o 00:02:49.134 CC lib/env_dpdk/memory.o 00:02:49.134 LIB libspdk_conf.a 00:02:49.134 CC lib/env_dpdk/pci.o 00:02:49.134 SO libspdk_conf.so.6.0 00:02:49.134 CC lib/env_dpdk/init.o 00:02:49.134 CC lib/env_dpdk/threads.o 00:02:49.134 LIB libspdk_rdma_utils.a 00:02:49.134 LIB libspdk_json.a 00:02:49.134 SYMLINK libspdk_conf.so 00:02:49.134 CC lib/env_dpdk/pci_ioat.o 00:02:49.134 SO libspdk_rdma_utils.so.1.0 00:02:49.134 SO libspdk_json.so.6.0 00:02:49.393 SYMLINK libspdk_rdma_utils.so 00:02:49.393 SYMLINK libspdk_json.so 00:02:49.393 CC lib/env_dpdk/pci_virtio.o 00:02:49.393 CC lib/env_dpdk/pci_vmd.o 00:02:49.393 CC lib/rdma_provider/common.o 00:02:49.393 CC lib/env_dpdk/pci_idxd.o 00:02:49.393 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.393 CC lib/idxd/idxd_kernel.o 00:02:49.393 CC lib/env_dpdk/pci_event.o 00:02:49.652 CC 
lib/env_dpdk/sigbus_handler.o 00:02:49.652 CC lib/env_dpdk/pci_dpdk.o 00:02:49.652 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.652 LIB libspdk_vmd.a 00:02:49.652 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.652 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.652 SO libspdk_vmd.so.6.0 00:02:49.652 LIB libspdk_idxd.a 00:02:49.652 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.652 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.652 SO libspdk_idxd.so.12.1 00:02:49.652 SYMLINK libspdk_vmd.so 00:02:49.652 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.652 SYMLINK libspdk_idxd.so 00:02:49.911 LIB libspdk_rdma_provider.a 00:02:49.911 SO libspdk_rdma_provider.so.7.0 00:02:49.911 SYMLINK libspdk_rdma_provider.so 00:02:49.911 LIB libspdk_jsonrpc.a 00:02:50.170 SO libspdk_jsonrpc.so.6.0 00:02:50.170 SYMLINK libspdk_jsonrpc.so 00:02:50.739 CC lib/rpc/rpc.o 00:02:50.739 LIB libspdk_env_dpdk.a 00:02:50.739 SO libspdk_env_dpdk.so.15.1 00:02:50.739 LIB libspdk_rpc.a 00:02:51.028 SO libspdk_rpc.so.6.0 00:02:51.028 SYMLINK libspdk_env_dpdk.so 00:02:51.028 SYMLINK libspdk_rpc.so 00:02:51.322 CC lib/keyring/keyring.o 00:02:51.322 CC lib/keyring/keyring_rpc.o 00:02:51.322 CC lib/notify/notify.o 00:02:51.322 CC lib/notify/notify_rpc.o 00:02:51.322 CC lib/trace/trace.o 00:02:51.322 CC lib/trace/trace_flags.o 00:02:51.322 CC lib/trace/trace_rpc.o 00:02:51.581 LIB libspdk_notify.a 00:02:51.581 SO libspdk_notify.so.6.0 00:02:51.581 LIB libspdk_keyring.a 00:02:51.581 SYMLINK libspdk_notify.so 00:02:51.581 LIB libspdk_trace.a 00:02:51.581 SO libspdk_keyring.so.2.0 00:02:51.581 SO libspdk_trace.so.11.0 00:02:51.841 SYMLINK libspdk_keyring.so 00:02:51.841 SYMLINK libspdk_trace.so 00:02:52.100 CC lib/thread/thread.o 00:02:52.100 CC lib/thread/iobuf.o 00:02:52.100 CC lib/sock/sock.o 00:02:52.360 CC lib/sock/sock_rpc.o 00:02:52.620 LIB libspdk_sock.a 00:02:52.620 SO libspdk_sock.so.10.0 00:02:52.880 SYMLINK libspdk_sock.so 00:02:53.449 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:53.449 CC lib/nvme/nvme_ctrlr.o 
00:02:53.449 CC lib/nvme/nvme_fabric.o 00:02:53.449 CC lib/nvme/nvme_ns_cmd.o 00:02:53.449 CC lib/nvme/nvme_ns.o 00:02:53.449 CC lib/nvme/nvme_pcie_common.o 00:02:53.449 CC lib/nvme/nvme_pcie.o 00:02:53.449 CC lib/nvme/nvme_qpair.o 00:02:53.449 CC lib/nvme/nvme.o 00:02:54.018 LIB libspdk_thread.a 00:02:54.018 CC lib/nvme/nvme_quirks.o 00:02:54.018 CC lib/nvme/nvme_transport.o 00:02:54.018 SO libspdk_thread.so.11.0 00:02:54.018 CC lib/nvme/nvme_discovery.o 00:02:54.277 SYMLINK libspdk_thread.so 00:02:54.277 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.277 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.277 CC lib/nvme/nvme_tcp.o 00:02:54.277 CC lib/nvme/nvme_opal.o 00:02:54.277 CC lib/accel/accel.o 00:02:54.536 CC lib/nvme/nvme_io_msg.o 00:02:54.536 CC lib/nvme/nvme_poll_group.o 00:02:54.536 CC lib/nvme/nvme_zns.o 00:02:54.795 CC lib/accel/accel_rpc.o 00:02:54.795 CC lib/nvme/nvme_stubs.o 00:02:54.795 CC lib/nvme/nvme_auth.o 00:02:54.795 CC lib/nvme/nvme_cuse.o 00:02:54.795 CC lib/nvme/nvme_rdma.o 00:02:55.054 CC lib/blob/blobstore.o 00:02:55.313 CC lib/blob/request.o 00:02:55.313 CC lib/blob/zeroes.o 00:02:55.313 CC lib/blob/blob_bs_dev.o 00:02:55.313 CC lib/accel/accel_sw.o 00:02:55.572 CC lib/init/json_config.o 00:02:55.573 CC lib/init/subsystem.o 00:02:55.573 CC lib/virtio/virtio.o 00:02:55.833 CC lib/virtio/virtio_vhost_user.o 00:02:55.833 LIB libspdk_accel.a 00:02:55.833 CC lib/init/subsystem_rpc.o 00:02:55.833 CC lib/init/rpc.o 00:02:55.833 CC lib/virtio/virtio_vfio_user.o 00:02:55.833 SO libspdk_accel.so.16.0 00:02:55.833 SYMLINK libspdk_accel.so 00:02:55.833 CC lib/virtio/virtio_pci.o 00:02:55.833 CC lib/fsdev/fsdev.o 00:02:55.833 CC lib/fsdev/fsdev_io.o 00:02:55.833 LIB libspdk_init.a 00:02:56.092 CC lib/fsdev/fsdev_rpc.o 00:02:56.092 SO libspdk_init.so.6.0 00:02:56.092 SYMLINK libspdk_init.so 00:02:56.092 CC lib/bdev/bdev.o 00:02:56.092 CC lib/bdev/bdev_rpc.o 00:02:56.092 CC lib/bdev/bdev_zone.o 00:02:56.092 CC lib/bdev/part.o 00:02:56.352 LIB libspdk_virtio.a 
00:02:56.352 CC lib/bdev/scsi_nvme.o 00:02:56.352 SO libspdk_virtio.so.7.0 00:02:56.352 CC lib/event/app.o 00:02:56.352 CC lib/event/reactor.o 00:02:56.352 CC lib/event/log_rpc.o 00:02:56.352 SYMLINK libspdk_virtio.so 00:02:56.352 CC lib/event/app_rpc.o 00:02:56.352 CC lib/event/scheduler_static.o 00:02:56.611 LIB libspdk_nvme.a 00:02:56.611 LIB libspdk_fsdev.a 00:02:56.611 SO libspdk_fsdev.so.2.0 00:02:56.871 SYMLINK libspdk_fsdev.so 00:02:56.871 SO libspdk_nvme.so.15.0 00:02:56.871 LIB libspdk_event.a 00:02:56.871 SO libspdk_event.so.14.0 00:02:57.130 SYMLINK libspdk_event.so 00:02:57.130 SYMLINK libspdk_nvme.so 00:02:57.130 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:57.728 LIB libspdk_fuse_dispatcher.a 00:02:57.988 SO libspdk_fuse_dispatcher.so.1.0 00:02:57.988 SYMLINK libspdk_fuse_dispatcher.so 00:02:58.928 LIB libspdk_blob.a 00:02:58.928 SO libspdk_blob.so.12.0 00:02:59.189 SYMLINK libspdk_blob.so 00:02:59.189 LIB libspdk_bdev.a 00:02:59.189 SO libspdk_bdev.so.17.0 00:02:59.450 SYMLINK libspdk_bdev.so 00:02:59.450 CC lib/lvol/lvol.o 00:02:59.450 CC lib/blobfs/blobfs.o 00:02:59.450 CC lib/blobfs/tree.o 00:02:59.710 CC lib/nbd/nbd.o 00:02:59.710 CC lib/nbd/nbd_rpc.o 00:02:59.710 CC lib/nvmf/ctrlr.o 00:02:59.710 CC lib/nvmf/ctrlr_discovery.o 00:02:59.710 CC lib/ftl/ftl_core.o 00:02:59.710 CC lib/scsi/dev.o 00:02:59.710 CC lib/ublk/ublk.o 00:02:59.710 CC lib/nvmf/ctrlr_bdev.o 00:02:59.710 CC lib/nvmf/subsystem.o 00:02:59.969 CC lib/scsi/lun.o 00:02:59.969 LIB libspdk_nbd.a 00:02:59.969 CC lib/ftl/ftl_init.o 00:02:59.969 SO libspdk_nbd.so.7.0 00:03:00.228 SYMLINK libspdk_nbd.so 00:03:00.228 CC lib/ublk/ublk_rpc.o 00:03:00.228 CC lib/nvmf/nvmf.o 00:03:00.228 CC lib/scsi/port.o 00:03:00.228 CC lib/ftl/ftl_layout.o 00:03:00.228 CC lib/scsi/scsi.o 00:03:00.228 LIB libspdk_ublk.a 00:03:00.228 CC lib/ftl/ftl_debug.o 00:03:00.488 SO libspdk_ublk.so.3.0 00:03:00.488 LIB libspdk_blobfs.a 00:03:00.488 SYMLINK libspdk_ublk.so 00:03:00.488 CC lib/nvmf/nvmf_rpc.o 
00:03:00.488 SO libspdk_blobfs.so.11.0 00:03:00.488 CC lib/scsi/scsi_bdev.o 00:03:00.488 CC lib/ftl/ftl_io.o 00:03:00.488 SYMLINK libspdk_blobfs.so 00:03:00.488 CC lib/nvmf/transport.o 00:03:00.488 LIB libspdk_lvol.a 00:03:00.488 CC lib/ftl/ftl_sb.o 00:03:00.488 CC lib/nvmf/tcp.o 00:03:00.488 SO libspdk_lvol.so.11.0 00:03:00.746 SYMLINK libspdk_lvol.so 00:03:00.746 CC lib/nvmf/stubs.o 00:03:00.746 CC lib/ftl/ftl_l2p.o 00:03:00.746 CC lib/ftl/ftl_l2p_flat.o 00:03:01.005 CC lib/nvmf/mdns_server.o 00:03:01.006 CC lib/ftl/ftl_nv_cache.o 00:03:01.006 CC lib/scsi/scsi_pr.o 00:03:01.266 CC lib/nvmf/rdma.o 00:03:01.266 CC lib/nvmf/auth.o 00:03:01.266 CC lib/scsi/scsi_rpc.o 00:03:01.266 CC lib/scsi/task.o 00:03:01.266 CC lib/ftl/ftl_band.o 00:03:01.525 CC lib/ftl/ftl_band_ops.o 00:03:01.525 CC lib/ftl/ftl_writer.o 00:03:01.525 CC lib/ftl/ftl_rq.o 00:03:01.525 LIB libspdk_scsi.a 00:03:01.525 SO libspdk_scsi.so.9.0 00:03:01.785 SYMLINK libspdk_scsi.so 00:03:01.785 CC lib/ftl/ftl_reloc.o 00:03:01.785 CC lib/ftl/ftl_l2p_cache.o 00:03:01.785 CC lib/ftl/ftl_p2l.o 00:03:01.785 CC lib/ftl/ftl_p2l_log.o 00:03:01.785 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.045 CC lib/iscsi/conn.o 00:03:02.045 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.045 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.045 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.045 CC lib/iscsi/init_grp.o 00:03:02.045 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.304 CC lib/vhost/vhost.o 00:03:02.304 CC lib/vhost/vhost_rpc.o 00:03:02.304 CC lib/vhost/vhost_scsi.o 00:03:02.304 CC lib/vhost/vhost_blk.o 00:03:02.304 CC lib/vhost/rte_vhost_user.o 00:03:02.304 CC lib/iscsi/iscsi.o 00:03:02.563 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.563 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.824 CC lib/iscsi/param.o 00:03:02.824 CC lib/iscsi/portal_grp.o 00:03:02.824 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.083 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.083 CC lib/iscsi/tgt_node.o 00:03:03.083 CC lib/iscsi/iscsi_subsystem.o 00:03:03.083 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.083 CC lib/iscsi/iscsi_rpc.o 00:03:03.344 CC lib/iscsi/task.o 00:03:03.344 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.344 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.344 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.344 LIB libspdk_vhost.a 00:03:03.344 CC lib/ftl/utils/ftl_conf.o 00:03:03.344 CC lib/ftl/utils/ftl_md.o 00:03:03.604 SO libspdk_vhost.so.8.0 00:03:03.604 CC lib/ftl/utils/ftl_mempool.o 00:03:03.604 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.604 CC lib/ftl/utils/ftl_property.o 00:03:03.604 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.605 SYMLINK libspdk_vhost.so 00:03:03.605 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.605 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.865 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.865 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.865 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.865 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.865 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.865 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.865 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.865 LIB libspdk_nvmf.a 00:03:03.865 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.865 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:03.865 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:03.865 CC lib/ftl/base/ftl_base_dev.o 00:03:04.142 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.142 SO libspdk_nvmf.so.20.0 00:03:04.142 CC lib/ftl/ftl_trace.o 00:03:04.142 LIB libspdk_iscsi.a 00:03:04.142 SO libspdk_iscsi.so.8.0 00:03:04.431 SYMLINK libspdk_nvmf.so 00:03:04.431 LIB libspdk_ftl.a 00:03:04.431 SYMLINK libspdk_iscsi.so 00:03:04.690 SO libspdk_ftl.so.9.0 00:03:04.949 SYMLINK libspdk_ftl.so 00:03:05.209 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.469 CC module/keyring/file/keyring.o 00:03:05.469 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.469 CC module/keyring/linux/keyring.o 00:03:05.469 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.469 CC module/fsdev/aio/fsdev_aio.o 00:03:05.469 CC 
module/accel/error/accel_error.o 00:03:05.469 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.469 CC module/blob/bdev/blob_bdev.o 00:03:05.469 CC module/sock/posix/posix.o 00:03:05.469 LIB libspdk_env_dpdk_rpc.a 00:03:05.469 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.469 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.469 CC module/accel/error/accel_error_rpc.o 00:03:05.469 CC module/keyring/linux/keyring_rpc.o 00:03:05.469 CC module/keyring/file/keyring_rpc.o 00:03:05.469 LIB libspdk_scheduler_gscheduler.a 00:03:05.469 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.469 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.469 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.469 LIB libspdk_scheduler_dynamic.a 00:03:05.730 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:05.730 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.730 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.730 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.730 CC module/fsdev/aio/linux_aio_mgr.o 00:03:05.730 LIB libspdk_keyring_linux.a 00:03:05.730 LIB libspdk_accel_error.a 00:03:05.730 LIB libspdk_keyring_file.a 00:03:05.730 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.730 LIB libspdk_blob_bdev.a 00:03:05.730 SO libspdk_keyring_linux.so.1.0 00:03:05.730 SO libspdk_accel_error.so.2.0 00:03:05.730 SO libspdk_keyring_file.so.2.0 00:03:05.730 SO libspdk_blob_bdev.so.12.0 00:03:05.730 SYMLINK libspdk_keyring_linux.so 00:03:05.730 SYMLINK libspdk_accel_error.so 00:03:05.730 SYMLINK libspdk_blob_bdev.so 00:03:05.730 SYMLINK libspdk_keyring_file.so 00:03:05.730 CC module/accel/ioat/accel_ioat.o 00:03:05.730 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.730 CC module/accel/dsa/accel_dsa.o 00:03:05.730 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.990 CC module/accel/iaa/accel_iaa.o 00:03:05.990 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.990 LIB libspdk_accel_ioat.a 00:03:05.990 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.990 CC module/bdev/delay/vbdev_delay.o 00:03:05.990 CC module/bdev/error/vbdev_error.o 
00:03:05.990 SO libspdk_accel_ioat.so.6.0 00:03:05.990 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.249 SYMLINK libspdk_accel_ioat.so 00:03:06.249 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.249 LIB libspdk_accel_dsa.a 00:03:06.249 LIB libspdk_accel_iaa.a 00:03:06.249 LIB libspdk_fsdev_aio.a 00:03:06.249 SO libspdk_accel_dsa.so.5.0 00:03:06.249 SO libspdk_fsdev_aio.so.1.0 00:03:06.249 SO libspdk_accel_iaa.so.3.0 00:03:06.249 CC module/bdev/gpt/gpt.o 00:03:06.249 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.249 LIB libspdk_sock_posix.a 00:03:06.249 SYMLINK libspdk_accel_dsa.so 00:03:06.249 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.249 SYMLINK libspdk_fsdev_aio.so 00:03:06.249 SO libspdk_sock_posix.so.6.0 00:03:06.249 SYMLINK libspdk_accel_iaa.so 00:03:06.249 LIB libspdk_bdev_error.a 00:03:06.249 SO libspdk_bdev_error.so.6.0 00:03:06.509 SYMLINK libspdk_sock_posix.so 00:03:06.509 SYMLINK libspdk_bdev_error.so 00:03:06.509 LIB libspdk_blobfs_bdev.a 00:03:06.509 LIB libspdk_bdev_delay.a 00:03:06.509 SO libspdk_blobfs_bdev.so.6.0 00:03:06.509 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.509 CC module/bdev/null/bdev_null.o 00:03:06.509 CC module/bdev/malloc/bdev_malloc.o 00:03:06.509 SO libspdk_bdev_delay.so.6.0 00:03:06.509 CC module/bdev/nvme/bdev_nvme.o 00:03:06.509 SYMLINK libspdk_blobfs_bdev.so 00:03:06.509 SYMLINK libspdk_bdev_delay.so 00:03:06.509 CC module/bdev/raid/bdev_raid.o 00:03:06.509 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.509 LIB libspdk_bdev_gpt.a 00:03:06.509 CC module/bdev/split/vbdev_split.o 00:03:06.509 SO libspdk_bdev_gpt.so.6.0 00:03:06.768 SYMLINK libspdk_bdev_gpt.so 00:03:06.768 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.768 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.768 CC module/bdev/aio/bdev_aio.o 00:03:06.768 CC module/bdev/null/bdev_null_rpc.o 00:03:06.768 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.027 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.027 LIB libspdk_bdev_split.a 00:03:07.027 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.027 SO libspdk_bdev_split.so.6.0 00:03:07.027 LIB libspdk_bdev_null.a 00:03:07.027 SO libspdk_bdev_null.so.6.0 00:03:07.027 SYMLINK libspdk_bdev_split.so 00:03:07.027 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.027 CC module/bdev/nvme/nvme_rpc.o 00:03:07.027 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.027 SYMLINK libspdk_bdev_null.so 00:03:07.027 LIB libspdk_bdev_malloc.a 00:03:07.027 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.027 LIB libspdk_bdev_passthru.a 00:03:07.027 LIB libspdk_bdev_zone_block.a 00:03:07.027 SO libspdk_bdev_malloc.so.6.0 00:03:07.027 SO libspdk_bdev_passthru.so.6.0 00:03:07.027 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.027 SO libspdk_bdev_zone_block.so.6.0 00:03:07.287 SYMLINK libspdk_bdev_malloc.so 00:03:07.287 SYMLINK libspdk_bdev_passthru.so 00:03:07.287 SYMLINK libspdk_bdev_zone_block.so 00:03:07.287 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.287 CC module/bdev/nvme/vbdev_opal.o 00:03:07.287 LIB libspdk_bdev_aio.a 00:03:07.287 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.287 SO libspdk_bdev_aio.so.6.0 00:03:07.287 CC module/bdev/ftl/bdev_ftl.o 00:03:07.287 SYMLINK libspdk_bdev_aio.so 00:03:07.287 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.287 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.546 LIB libspdk_bdev_lvol.a 00:03:07.546 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.546 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.546 SO libspdk_bdev_lvol.so.6.0 00:03:07.546 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.546 SYMLINK libspdk_bdev_lvol.so 00:03:07.546 CC module/bdev/raid/raid0.o 00:03:07.806 CC module/bdev/raid/raid1.o 00:03:07.806 CC module/bdev/raid/concat.o 00:03:07.806 CC module/bdev/raid/raid5f.o 00:03:07.806 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.806 LIB libspdk_bdev_ftl.a 00:03:07.806 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.806 SO libspdk_bdev_ftl.so.6.0 00:03:07.806 LIB libspdk_bdev_iscsi.a 00:03:07.806 SO libspdk_bdev_iscsi.so.6.0 00:03:07.806 
SYMLINK libspdk_bdev_ftl.so 00:03:07.806 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.806 SYMLINK libspdk_bdev_iscsi.so 00:03:08.376 LIB libspdk_bdev_raid.a 00:03:08.376 LIB libspdk_bdev_virtio.a 00:03:08.376 SO libspdk_bdev_virtio.so.6.0 00:03:08.376 SO libspdk_bdev_raid.so.6.0 00:03:08.376 SYMLINK libspdk_bdev_virtio.so 00:03:08.376 SYMLINK libspdk_bdev_raid.so 00:03:09.758 LIB libspdk_bdev_nvme.a 00:03:09.758 SO libspdk_bdev_nvme.so.7.1 00:03:09.758 SYMLINK libspdk_bdev_nvme.so 00:03:10.339 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.339 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.339 CC module/event/subsystems/fsdev/fsdev.o 00:03:10.339 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.339 CC module/event/subsystems/sock/sock.o 00:03:10.339 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.339 CC module/event/subsystems/keyring/keyring.o 00:03:10.339 CC module/event/subsystems/vmd/vmd.o 00:03:10.339 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.599 LIB libspdk_event_keyring.a 00:03:10.599 LIB libspdk_event_fsdev.a 00:03:10.599 LIB libspdk_event_sock.a 00:03:10.599 LIB libspdk_event_scheduler.a 00:03:10.599 LIB libspdk_event_vmd.a 00:03:10.599 LIB libspdk_event_vhost_blk.a 00:03:10.599 LIB libspdk_event_iobuf.a 00:03:10.599 SO libspdk_event_fsdev.so.1.0 00:03:10.599 SO libspdk_event_sock.so.5.0 00:03:10.599 SO libspdk_event_keyring.so.1.0 00:03:10.599 SO libspdk_event_scheduler.so.4.0 00:03:10.599 SO libspdk_event_vhost_blk.so.3.0 00:03:10.599 SO libspdk_event_vmd.so.6.0 00:03:10.599 SO libspdk_event_iobuf.so.3.0 00:03:10.599 SYMLINK libspdk_event_keyring.so 00:03:10.599 SYMLINK libspdk_event_sock.so 00:03:10.599 SYMLINK libspdk_event_scheduler.so 00:03:10.599 SYMLINK libspdk_event_fsdev.so 00:03:10.599 SYMLINK libspdk_event_vhost_blk.so 00:03:10.599 SYMLINK libspdk_event_vmd.so 00:03:10.599 SYMLINK libspdk_event_iobuf.so 00:03:11.172 CC module/event/subsystems/accel/accel.o 00:03:11.172 LIB libspdk_event_accel.a 
00:03:11.172 SO libspdk_event_accel.so.6.0 00:03:11.433 SYMLINK libspdk_event_accel.so 00:03:11.692 CC module/event/subsystems/bdev/bdev.o 00:03:11.952 LIB libspdk_event_bdev.a 00:03:11.952 SO libspdk_event_bdev.so.6.0 00:03:12.212 SYMLINK libspdk_event_bdev.so 00:03:12.473 CC module/event/subsystems/ublk/ublk.o 00:03:12.473 CC module/event/subsystems/scsi/scsi.o 00:03:12.473 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.473 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.473 CC module/event/subsystems/nbd/nbd.o 00:03:12.734 LIB libspdk_event_ublk.a 00:03:12.734 LIB libspdk_event_nbd.a 00:03:12.734 SO libspdk_event_ublk.so.3.0 00:03:12.734 LIB libspdk_event_scsi.a 00:03:12.734 SO libspdk_event_nbd.so.6.0 00:03:12.734 SO libspdk_event_scsi.so.6.0 00:03:12.734 LIB libspdk_event_nvmf.a 00:03:12.734 SYMLINK libspdk_event_ublk.so 00:03:12.734 SYMLINK libspdk_event_nbd.so 00:03:12.734 SYMLINK libspdk_event_scsi.so 00:03:12.734 SO libspdk_event_nvmf.so.6.0 00:03:12.995 SYMLINK libspdk_event_nvmf.so 00:03:13.254 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.254 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.254 LIB libspdk_event_vhost_scsi.a 00:03:13.254 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.514 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.514 LIB libspdk_event_iscsi.a 00:03:13.514 SO libspdk_event_iscsi.so.6.0 00:03:13.514 SYMLINK libspdk_event_iscsi.so 00:03:13.773 SO libspdk.so.6.0 00:03:13.773 SYMLINK libspdk.so 00:03:14.032 CXX app/trace/trace.o 00:03:14.032 CC app/trace_record/trace_record.o 00:03:14.032 TEST_HEADER include/spdk/accel.h 00:03:14.032 TEST_HEADER include/spdk/accel_module.h 00:03:14.032 TEST_HEADER include/spdk/assert.h 00:03:14.032 TEST_HEADER include/spdk/barrier.h 00:03:14.032 TEST_HEADER include/spdk/base64.h 00:03:14.032 TEST_HEADER include/spdk/bdev.h 00:03:14.032 TEST_HEADER include/spdk/bdev_module.h 00:03:14.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.032 TEST_HEADER include/spdk/bdev_zone.h 
00:03:14.032 TEST_HEADER include/spdk/bit_array.h 00:03:14.032 TEST_HEADER include/spdk/bit_pool.h 00:03:14.032 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.032 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.032 TEST_HEADER include/spdk/blobfs.h 00:03:14.032 TEST_HEADER include/spdk/blob.h 00:03:14.291 TEST_HEADER include/spdk/conf.h 00:03:14.291 TEST_HEADER include/spdk/config.h 00:03:14.291 TEST_HEADER include/spdk/cpuset.h 00:03:14.291 TEST_HEADER include/spdk/crc16.h 00:03:14.291 TEST_HEADER include/spdk/crc32.h 00:03:14.291 TEST_HEADER include/spdk/crc64.h 00:03:14.291 TEST_HEADER include/spdk/dif.h 00:03:14.291 TEST_HEADER include/spdk/dma.h 00:03:14.291 TEST_HEADER include/spdk/endian.h 00:03:14.291 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.291 TEST_HEADER include/spdk/env.h 00:03:14.291 TEST_HEADER include/spdk/event.h 00:03:14.291 TEST_HEADER include/spdk/fd_group.h 00:03:14.291 TEST_HEADER include/spdk/fd.h 00:03:14.291 TEST_HEADER include/spdk/file.h 00:03:14.291 CC examples/util/zipf/zipf.o 00:03:14.291 TEST_HEADER include/spdk/fsdev.h 00:03:14.291 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.291 TEST_HEADER include/spdk/ftl.h 00:03:14.291 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.291 TEST_HEADER include/spdk/hexlify.h 00:03:14.291 TEST_HEADER include/spdk/histogram_data.h 00:03:14.291 TEST_HEADER include/spdk/idxd.h 00:03:14.291 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.291 TEST_HEADER include/spdk/init.h 00:03:14.291 TEST_HEADER include/spdk/ioat.h 00:03:14.291 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.291 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.291 CC test/thread/poller_perf/poller_perf.o 00:03:14.291 TEST_HEADER include/spdk/json.h 00:03:14.291 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.291 TEST_HEADER include/spdk/keyring.h 00:03:14.291 TEST_HEADER include/spdk/keyring_module.h 00:03:14.291 TEST_HEADER include/spdk/likely.h 00:03:14.291 CC examples/ioat/perf/perf.o 00:03:14.291 TEST_HEADER include/spdk/log.h 00:03:14.291 
TEST_HEADER include/spdk/lvol.h 00:03:14.291 TEST_HEADER include/spdk/md5.h 00:03:14.291 TEST_HEADER include/spdk/memory.h 00:03:14.291 TEST_HEADER include/spdk/mmio.h 00:03:14.291 TEST_HEADER include/spdk/nbd.h 00:03:14.291 CC test/dma/test_dma/test_dma.o 00:03:14.291 CC test/app/bdev_svc/bdev_svc.o 00:03:14.291 TEST_HEADER include/spdk/net.h 00:03:14.291 TEST_HEADER include/spdk/notify.h 00:03:14.291 TEST_HEADER include/spdk/nvme.h 00:03:14.291 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.291 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.291 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.291 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.291 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.291 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.291 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.291 TEST_HEADER include/spdk/nvmf.h 00:03:14.291 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.291 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.291 TEST_HEADER include/spdk/opal.h 00:03:14.291 TEST_HEADER include/spdk/opal_spec.h 00:03:14.291 TEST_HEADER include/spdk/pci_ids.h 00:03:14.291 TEST_HEADER include/spdk/pipe.h 00:03:14.291 TEST_HEADER include/spdk/queue.h 00:03:14.291 TEST_HEADER include/spdk/reduce.h 00:03:14.291 TEST_HEADER include/spdk/rpc.h 00:03:14.291 TEST_HEADER include/spdk/scheduler.h 00:03:14.291 TEST_HEADER include/spdk/scsi.h 00:03:14.291 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.291 TEST_HEADER include/spdk/sock.h 00:03:14.291 TEST_HEADER include/spdk/stdinc.h 00:03:14.291 TEST_HEADER include/spdk/string.h 00:03:14.291 TEST_HEADER include/spdk/thread.h 00:03:14.291 TEST_HEADER include/spdk/trace.h 00:03:14.291 TEST_HEADER include/spdk/trace_parser.h 00:03:14.291 TEST_HEADER include/spdk/tree.h 00:03:14.291 TEST_HEADER include/spdk/ublk.h 00:03:14.291 TEST_HEADER include/spdk/util.h 00:03:14.291 TEST_HEADER include/spdk/uuid.h 00:03:14.291 TEST_HEADER include/spdk/version.h 00:03:14.291 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.291 
TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.291 TEST_HEADER include/spdk/vhost.h 00:03:14.291 TEST_HEADER include/spdk/vmd.h 00:03:14.291 TEST_HEADER include/spdk/xor.h 00:03:14.291 TEST_HEADER include/spdk/zipf.h 00:03:14.291 CXX test/cpp_headers/accel.o 00:03:14.291 LINK interrupt_tgt 00:03:14.291 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.291 LINK zipf 00:03:14.291 LINK poller_perf 00:03:14.291 LINK spdk_trace_record 00:03:14.551 LINK bdev_svc 00:03:14.551 LINK ioat_perf 00:03:14.551 CXX test/cpp_headers/accel_module.o 00:03:14.551 LINK spdk_trace 00:03:14.551 CC examples/ioat/verify/verify.o 00:03:14.811 CC test/rpc_client/rpc_client_test.o 00:03:14.811 CXX test/cpp_headers/assert.o 00:03:14.811 CC app/nvmf_tgt/nvmf_main.o 00:03:14.811 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.811 LINK test_dma 00:03:14.811 CC app/spdk_tgt/spdk_tgt.o 00:03:14.811 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.811 CXX test/cpp_headers/barrier.o 00:03:14.811 LINK verify 00:03:14.811 CC test/event/event_perf/event_perf.o 00:03:14.811 LINK rpc_client_test 00:03:14.811 LINK nvmf_tgt 00:03:14.811 LINK mem_callbacks 00:03:14.811 LINK iscsi_tgt 00:03:15.071 LINK spdk_tgt 00:03:15.071 CXX test/cpp_headers/base64.o 00:03:15.071 LINK event_perf 00:03:15.071 CC test/event/reactor/reactor.o 00:03:15.071 CXX test/cpp_headers/bdev.o 00:03:15.071 CC app/spdk_lspci/spdk_lspci.o 00:03:15.071 CC test/env/vtophys/vtophys.o 00:03:15.331 CC examples/thread/thread/thread_ex.o 00:03:15.331 LINK nvme_fuzz 00:03:15.331 LINK reactor 00:03:15.331 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.331 LINK spdk_lspci 00:03:15.331 CC test/blobfs/mkfs/mkfs.o 00:03:15.331 CC test/accel/dif/dif.o 00:03:15.331 LINK vtophys 00:03:15.331 CXX test/cpp_headers/bdev_module.o 00:03:15.591 CC test/lvol/esnap/esnap.o 00:03:15.591 LINK thread 00:03:15.591 LINK mkfs 00:03:15.591 CXX test/cpp_headers/bdev_zone.o 00:03:15.591 CC test/event/reactor_perf/reactor_perf.o 00:03:15.591 CC app/spdk_nvme_perf/perf.o 
00:03:15.591 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.591 CC test/nvme/aer/aer.o 00:03:15.851 LINK reactor_perf 00:03:15.851 CXX test/cpp_headers/bit_array.o 00:03:15.851 LINK env_dpdk_post_init 00:03:15.851 CC examples/sock/hello_world/hello_sock.o 00:03:15.851 CXX test/cpp_headers/bit_pool.o 00:03:15.851 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.851 CC test/event/app_repeat/app_repeat.o 00:03:15.851 LINK aer 00:03:16.110 CXX test/cpp_headers/blob_bdev.o 00:03:16.110 LINK lsvmd 00:03:16.110 CC test/env/memory/memory_ut.o 00:03:16.110 LINK dif 00:03:16.110 LINK app_repeat 00:03:16.110 LINK hello_sock 00:03:16.110 CC test/nvme/reset/reset.o 00:03:16.370 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.370 CC examples/vmd/led/led.o 00:03:16.370 CC test/env/pci/pci_ut.o 00:03:16.370 CC app/spdk_nvme_identify/identify.o 00:03:16.370 CXX test/cpp_headers/blobfs.o 00:03:16.370 LINK led 00:03:16.370 LINK reset 00:03:16.370 CC test/event/scheduler/scheduler.o 00:03:16.629 LINK spdk_nvme_perf 00:03:16.629 CXX test/cpp_headers/blob.o 00:03:16.629 LINK scheduler 00:03:16.889 CC test/nvme/sgl/sgl.o 00:03:16.889 CXX test/cpp_headers/conf.o 00:03:16.889 LINK pci_ut 00:03:16.889 CC examples/idxd/perf/perf.o 00:03:16.889 CC test/nvme/e2edp/nvme_dp.o 00:03:16.889 CXX test/cpp_headers/config.o 00:03:16.889 CXX test/cpp_headers/cpuset.o 00:03:17.149 LINK sgl 00:03:17.149 CC test/nvme/overhead/overhead.o 00:03:17.149 CXX test/cpp_headers/crc16.o 00:03:17.149 CC test/nvme/err_injection/err_injection.o 00:03:17.149 LINK nvme_dp 00:03:17.149 LINK iscsi_fuzz 00:03:17.149 LINK idxd_perf 00:03:17.149 LINK memory_ut 00:03:17.409 CXX test/cpp_headers/crc32.o 00:03:17.409 LINK overhead 00:03:17.409 LINK spdk_nvme_identify 00:03:17.409 LINK err_injection 00:03:17.409 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:17.409 CXX test/cpp_headers/crc64.o 00:03:17.409 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.409 CC examples/accel/perf/accel_perf.o 00:03:17.668 CC 
app/spdk_nvme_discover/discovery_aer.o 00:03:17.668 CC test/nvme/startup/startup.o 00:03:17.668 CXX test/cpp_headers/dif.o 00:03:17.668 CC test/bdev/bdevio/bdevio.o 00:03:17.668 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.668 CC examples/nvme/hello_world/hello_world.o 00:03:17.668 LINK hello_fsdev 00:03:17.668 CC examples/blob/hello_world/hello_blob.o 00:03:17.668 CXX test/cpp_headers/dma.o 00:03:17.668 LINK spdk_nvme_discover 00:03:17.926 LINK startup 00:03:17.926 LINK hello_world 00:03:17.926 LINK hello_blob 00:03:17.926 CXX test/cpp_headers/endian.o 00:03:17.926 CC app/spdk_top/spdk_top.o 00:03:17.926 LINK bdevio 00:03:17.926 CC examples/blob/cli/blobcli.o 00:03:18.185 CXX test/cpp_headers/env_dpdk.o 00:03:18.185 LINK accel_perf 00:03:18.185 CC test/nvme/reserve/reserve.o 00:03:18.185 LINK vhost_fuzz 00:03:18.185 CC examples/nvme/reconnect/reconnect.o 00:03:18.185 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:18.185 CXX test/cpp_headers/env.o 00:03:18.445 LINK reserve 00:03:18.445 CC examples/nvme/arbitration/arbitration.o 00:03:18.445 CC test/app/histogram_perf/histogram_perf.o 00:03:18.445 CC examples/nvme/hotplug/hotplug.o 00:03:18.445 CXX test/cpp_headers/event.o 00:03:18.445 LINK histogram_perf 00:03:18.445 LINK reconnect 00:03:18.445 LINK blobcli 00:03:18.704 CXX test/cpp_headers/fd_group.o 00:03:18.704 LINK hotplug 00:03:18.704 CC test/nvme/simple_copy/simple_copy.o 00:03:18.704 LINK arbitration 00:03:18.704 CC test/app/jsoncat/jsoncat.o 00:03:18.704 LINK nvme_manage 00:03:18.704 CXX test/cpp_headers/fd.o 00:03:18.704 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:18.963 CC examples/nvme/abort/abort.o 00:03:18.963 LINK jsoncat 00:03:18.963 LINK simple_copy 00:03:18.963 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:18.963 CXX test/cpp_headers/file.o 00:03:18.963 LINK cmb_copy 00:03:18.963 CC app/vhost/vhost.o 00:03:18.963 LINK spdk_top 00:03:18.963 CC app/spdk_dd/spdk_dd.o 00:03:18.963 LINK pmr_persistence 00:03:18.963 CC 
test/app/stub/stub.o 00:03:19.221 CXX test/cpp_headers/fsdev.o 00:03:19.221 CXX test/cpp_headers/fsdev_module.o 00:03:19.221 CC test/nvme/connect_stress/connect_stress.o 00:03:19.221 CXX test/cpp_headers/ftl.o 00:03:19.221 LINK vhost 00:03:19.221 CXX test/cpp_headers/gpt_spec.o 00:03:19.221 LINK abort 00:03:19.221 LINK stub 00:03:19.221 LINK connect_stress 00:03:19.221 CXX test/cpp_headers/hexlify.o 00:03:19.480 CC test/nvme/boot_partition/boot_partition.o 00:03:19.480 CXX test/cpp_headers/histogram_data.o 00:03:19.480 CXX test/cpp_headers/idxd.o 00:03:19.480 LINK spdk_dd 00:03:19.480 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.480 CC test/nvme/compliance/nvme_compliance.o 00:03:19.480 LINK boot_partition 00:03:19.480 CXX test/cpp_headers/idxd_spec.o 00:03:19.480 CXX test/cpp_headers/init.o 00:03:19.738 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.738 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.738 CXX test/cpp_headers/ioat.o 00:03:19.738 CC app/fio/nvme/fio_plugin.o 00:03:19.738 CXX test/cpp_headers/ioat_spec.o 00:03:19.738 LINK hello_bdev 00:03:19.738 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.738 CC app/fio/bdev/fio_plugin.o 00:03:19.738 CXX test/cpp_headers/iscsi_spec.o 00:03:19.738 LINK fused_ordering 00:03:19.997 LINK nvme_compliance 00:03:19.997 CC test/nvme/fdp/fdp.o 00:03:19.997 LINK doorbell_aers 00:03:19.997 CXX test/cpp_headers/json.o 00:03:19.997 CXX test/cpp_headers/jsonrpc.o 00:03:19.997 CC test/nvme/cuse/cuse.o 00:03:19.997 CXX test/cpp_headers/keyring.o 00:03:20.256 CXX test/cpp_headers/keyring_module.o 00:03:20.256 CXX test/cpp_headers/likely.o 00:03:20.256 CXX test/cpp_headers/log.o 00:03:20.256 CXX test/cpp_headers/lvol.o 00:03:20.256 CXX test/cpp_headers/md5.o 00:03:20.256 LINK spdk_bdev 00:03:20.256 CXX test/cpp_headers/memory.o 00:03:20.256 CXX test/cpp_headers/mmio.o 00:03:20.256 CXX test/cpp_headers/nbd.o 00:03:20.256 LINK spdk_nvme 00:03:20.256 LINK fdp 00:03:20.256 CXX test/cpp_headers/net.o 00:03:20.514 CXX 
test/cpp_headers/notify.o 00:03:20.514 CXX test/cpp_headers/nvme.o 00:03:20.514 CXX test/cpp_headers/nvme_intel.o 00:03:20.514 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.514 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.514 CXX test/cpp_headers/nvme_spec.o 00:03:20.514 CXX test/cpp_headers/nvme_zns.o 00:03:20.514 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.514 LINK bdevperf 00:03:20.514 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.514 CXX test/cpp_headers/nvmf.o 00:03:20.773 CXX test/cpp_headers/nvmf_spec.o 00:03:20.773 CXX test/cpp_headers/nvmf_transport.o 00:03:20.773 CXX test/cpp_headers/opal.o 00:03:20.773 CXX test/cpp_headers/opal_spec.o 00:03:20.773 CXX test/cpp_headers/pci_ids.o 00:03:20.773 CXX test/cpp_headers/pipe.o 00:03:20.773 CXX test/cpp_headers/queue.o 00:03:20.773 CXX test/cpp_headers/reduce.o 00:03:20.773 CXX test/cpp_headers/rpc.o 00:03:20.773 CXX test/cpp_headers/scheduler.o 00:03:20.773 CXX test/cpp_headers/scsi.o 00:03:20.773 CXX test/cpp_headers/scsi_spec.o 00:03:20.773 CXX test/cpp_headers/sock.o 00:03:21.032 CXX test/cpp_headers/stdinc.o 00:03:21.032 CC examples/nvmf/nvmf/nvmf.o 00:03:21.032 CXX test/cpp_headers/string.o 00:03:21.032 CXX test/cpp_headers/thread.o 00:03:21.032 CXX test/cpp_headers/trace.o 00:03:21.032 CXX test/cpp_headers/trace_parser.o 00:03:21.032 CXX test/cpp_headers/tree.o 00:03:21.032 CXX test/cpp_headers/ublk.o 00:03:21.032 CXX test/cpp_headers/util.o 00:03:21.032 CXX test/cpp_headers/uuid.o 00:03:21.032 CXX test/cpp_headers/version.o 00:03:21.032 CXX test/cpp_headers/vfio_user_pci.o 00:03:21.032 CXX test/cpp_headers/vfio_user_spec.o 00:03:21.291 CXX test/cpp_headers/vhost.o 00:03:21.291 CXX test/cpp_headers/vmd.o 00:03:21.291 CXX test/cpp_headers/xor.o 00:03:21.291 CXX test/cpp_headers/zipf.o 00:03:21.291 LINK nvmf 00:03:21.291 LINK cuse 00:03:21.549 LINK esnap 00:03:22.118 00:03:22.118 real 1m24.797s 00:03:22.118 user 7m32.665s 00:03:22.118 sys 1m41.550s 00:03:22.118 09:16:55 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:03:22.118 09:16:55 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.118 ************************************ 00:03:22.118 END TEST make 00:03:22.118 ************************************ 00:03:22.118 09:16:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.118 09:16:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.118 09:16:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.118 09:16:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.118 09:16:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.118 09:16:55 -- pm/common@44 -- $ pid=5472 00:03:22.118 09:16:55 -- pm/common@50 -- $ kill -TERM 5472 00:03:22.118 09:16:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.118 09:16:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.118 09:16:55 -- pm/common@44 -- $ pid=5474 00:03:22.118 09:16:55 -- pm/common@50 -- $ kill -TERM 5474 00:03:22.118 09:16:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:22.118 09:16:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:22.118 09:16:56 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:22.118 09:16:56 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:22.118 09:16:56 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:22.118 09:16:56 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:22.118 09:16:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.118 09:16:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.118 09:16:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.118 09:16:56 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.118 09:16:56 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.118 09:16:56 -- 
scripts/common.sh@337 -- # IFS=.-: 00:03:22.118 09:16:56 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.118 09:16:56 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.118 09:16:56 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.118 09:16:56 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.118 09:16:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.118 09:16:56 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.118 09:16:56 -- scripts/common.sh@345 -- # : 1 00:03:22.118 09:16:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.118 09:16:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:22.118 09:16:56 -- scripts/common.sh@365 -- # decimal 1 00:03:22.118 09:16:56 -- scripts/common.sh@353 -- # local d=1 00:03:22.118 09:16:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.118 09:16:56 -- scripts/common.sh@355 -- # echo 1 00:03:22.118 09:16:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.118 09:16:56 -- scripts/common.sh@366 -- # decimal 2 00:03:22.118 09:16:56 -- scripts/common.sh@353 -- # local d=2 00:03:22.118 09:16:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.118 09:16:56 -- scripts/common.sh@355 -- # echo 2 00:03:22.118 09:16:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.118 09:16:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.118 09:16:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.118 09:16:56 -- scripts/common.sh@368 -- # return 0 00:03:22.118 09:16:56 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.118 09:16:56 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.118 --rc genhtml_branch_coverage=1 00:03:22.118 --rc genhtml_function_coverage=1 00:03:22.118 --rc genhtml_legend=1 00:03:22.118 --rc geninfo_all_blocks=1 00:03:22.118 --rc geninfo_unexecuted_blocks=1 
00:03:22.118 00:03:22.118 ' 00:03:22.118 09:16:56 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.118 --rc genhtml_branch_coverage=1 00:03:22.118 --rc genhtml_function_coverage=1 00:03:22.118 --rc genhtml_legend=1 00:03:22.118 --rc geninfo_all_blocks=1 00:03:22.118 --rc geninfo_unexecuted_blocks=1 00:03:22.118 00:03:22.118 ' 00:03:22.118 09:16:56 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.118 --rc genhtml_branch_coverage=1 00:03:22.118 --rc genhtml_function_coverage=1 00:03:22.118 --rc genhtml_legend=1 00:03:22.118 --rc geninfo_all_blocks=1 00:03:22.118 --rc geninfo_unexecuted_blocks=1 00:03:22.118 00:03:22.118 ' 00:03:22.118 09:16:56 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.118 --rc genhtml_branch_coverage=1 00:03:22.118 --rc genhtml_function_coverage=1 00:03:22.118 --rc genhtml_legend=1 00:03:22.118 --rc geninfo_all_blocks=1 00:03:22.118 --rc geninfo_unexecuted_blocks=1 00:03:22.118 00:03:22.118 ' 00:03:22.118 09:16:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.118 09:16:56 -- nvmf/common.sh@7 -- # uname -s 00:03:22.118 09:16:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.118 09:16:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.118 09:16:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.119 09:16:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.119 09:16:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.119 09:16:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.119 09:16:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.119 09:16:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.119 09:16:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.119 
09:16:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.378 09:16:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4b033963-8381-4c36-8d4b-2a6d498e4080 00:03:22.378 09:16:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=4b033963-8381-4c36-8d4b-2a6d498e4080 00:03:22.378 09:16:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.378 09:16:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.378 09:16:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:22.378 09:16:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.378 09:16:56 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.378 09:16:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.378 09:16:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.378 09:16:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.378 09:16:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.378 09:16:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.378 09:16:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.378 09:16:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:03:22.378 09:16:56 -- paths/export.sh@5 -- # export PATH 00:03:22.378 09:16:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.378 09:16:56 -- nvmf/common.sh@51 -- # : 0 00:03:22.378 09:16:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.378 09:16:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:22.378 09:16:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.378 09:16:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.378 09:16:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.378 09:16:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.378 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.378 09:16:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.378 09:16:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.378 09:16:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.378 09:16:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.378 09:16:56 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.378 09:16:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.378 09:16:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.378 09:16:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.378 09:16:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.378 09:16:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.378 09:16:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.378 09:16:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.378 09:16:56 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.378 09:16:56 -- spdk/autotest.sh@48 -- # udevadm_pid=55615 00:03:22.378 09:16:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.378 09:16:56 -- pm/common@17 -- # local monitor 00:03:22.378 09:16:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.378 09:16:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.378 09:16:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.378 09:16:56 -- pm/common@25 -- # sleep 1 00:03:22.378 09:16:56 -- pm/common@21 -- # date +%s 00:03:22.378 09:16:56 -- pm/common@21 -- # date +%s 00:03:22.378 09:16:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733995016 00:03:22.378 09:16:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733995016 00:03:22.378 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733995016_collect-vmstat.pm.log 00:03:22.378 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733995016_collect-cpu-load.pm.log 00:03:23.315 09:16:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.315 09:16:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.315 09:16:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.315 09:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:23.315 09:16:57 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.315 09:16:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:23.315 09:16:57 -- common/autotest_common.sh@10 -- # set +x 00:03:23.315 09:16:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:23.316 09:16:57 -- spdk/autotest.sh@61 -- # readlink -f 
/home/vagrant/spdk_repo/spdk 00:03:23.316 09:16:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:23.316 09:16:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.316 09:16:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:23.316 09:16:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.316 09:16:57 -- common/autotest_common.sh@1457 -- # uname 00:03:23.316 09:16:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:23.316 09:16:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.316 09:16:57 -- common/autotest_common.sh@1477 -- # uname 00:03:23.575 09:16:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:23.575 09:16:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.575 09:16:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.575 lcov: LCOV version 1.15 00:03:23.575 09:16:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:38.467 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.362 09:17:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.362 09:17:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.362 09:17:26 -- common/autotest_common.sh@10 -- # set +x 00:03:53.362 09:17:26 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.362 09:17:26 -- 
spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.362 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:53.362 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:53.362 09:17:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.362 09:17:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:53.362 09:17:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:53.362 09:17:27 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:53.362 09:17:27 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:53.362 09:17:27 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:53.362 09:17:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:53.362 09:17:27 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:53.362 09:17:27 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.362 09:17:27 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:53.362 09:17:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:53.362 09:17:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:53.362 09:17:27 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:53.362 09:17:27 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.362 09:17:27 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:53.362 09:17:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:53.362 09:17:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1653 -- # [[ 
none != none ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.362 09:17:27 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:53.362 09:17:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:53.362 09:17:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.362 09:17:27 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:53.362 09:17:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:53.362 09:17:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.362 09:17:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.362 09:17:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.362 09:17:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.362 09:17:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.362 09:17:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:53.362 09:17:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.362 09:17:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.362 No valid GPT data, bailing 00:03:53.362 09:17:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.362 09:17:27 -- scripts/common.sh@394 -- # pt= 00:03:53.362 09:17:27 -- scripts/common.sh@395 -- # return 1 00:03:53.362 09:17:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.362 1+0 records in 00:03:53.362 1+0 records out 00:03:53.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643065 s, 163 MB/s 00:03:53.362 09:17:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.362 09:17:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.362 09:17:27 
-- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:53.362 09:17:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:53.362 09:17:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:53.362 No valid GPT data, bailing 00:03:53.362 09:17:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.622 09:17:27 -- scripts/common.sh@394 -- # pt= 00:03:53.622 09:17:27 -- scripts/common.sh@395 -- # return 1 00:03:53.622 09:17:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:53.622 1+0 records in 00:03:53.622 1+0 records out 00:03:53.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053646 s, 195 MB/s 00:03:53.622 09:17:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.622 09:17:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.622 09:17:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:53.622 09:17:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:53.622 09:17:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:53.622 No valid GPT data, bailing 00:03:53.622 09:17:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.622 09:17:27 -- scripts/common.sh@394 -- # pt= 00:03:53.622 09:17:27 -- scripts/common.sh@395 -- # return 1 00:03:53.622 09:17:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:53.622 1+0 records in 00:03:53.622 1+0 records out 00:03:53.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037886 s, 277 MB/s 00:03:53.622 09:17:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.622 09:17:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.622 09:17:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:53.622 09:17:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:53.622 09:17:27 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:53.622 No valid GPT data, bailing 00:03:53.622 09:17:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.622 09:17:27 -- scripts/common.sh@394 -- # pt= 00:03:53.622 09:17:27 -- scripts/common.sh@395 -- # return 1 00:03:53.622 09:17:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:53.622 1+0 records in 00:03:53.622 1+0 records out 00:03:53.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0083443 s, 126 MB/s 00:03:53.622 09:17:27 -- spdk/autotest.sh@105 -- # sync 00:03:53.882 09:17:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.882 09:17:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.882 09:17:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.421 09:17:30 -- spdk/autotest.sh@111 -- # uname -s 00:03:56.421 09:17:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:56.421 09:17:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:56.421 09:17:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:57.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.356 Hugepages 00:03:57.356 node hugesize free / total 00:03:57.356 node0 1048576kB 0 / 0 00:03:57.356 node0 2048kB 0 / 0 00:03:57.356 00:03:57.356 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.356 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:57.614 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:57.614 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:57.614 09:17:31 -- spdk/autotest.sh@117 -- # uname -s 00:03:57.614 09:17:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:57.614 09:17:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:57.614 09:17:31 -- common/autotest_common.sh@1516 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.551 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.551 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.809 09:17:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:59.746 09:17:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:59.746 09:17:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:59.746 09:17:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.746 09:17:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:59.746 09:17:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:59.746 09:17:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:59.746 09:17:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.746 09:17:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:59.746 09:17:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:59.746 09:17:33 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:59.746 09:17:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:59.746 09:17:33 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.313 Waiting for block devices as requested 00:04:00.313 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.313 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.571 09:17:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.571 09:17:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:00.571 09:17:34 -- common/autotest_common.sh@1487 -- # grep 
0000:00:10.0/nvme/nvme 00:04:00.571 09:17:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.571 09:17:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.571 09:17:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:00.571 09:17:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.571 09:17:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:00.571 09:17:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:00.571 09:17:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:00.571 09:17:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.571 09:17:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:00.571 09:17:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.571 09:17:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.571 09:17:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.571 09:17:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.571 09:17:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.572 09:17:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.572 09:17:34 -- common/autotest_common.sh@1543 -- # continue 00:04:00.572 09:17:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.572 09:17:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.572 09:17:34 -- 
common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:00.572 09:17:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:00.572 09:17:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.572 09:17:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.572 09:17:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.572 09:17:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.572 09:17:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.572 09:17:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.572 09:17:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.572 09:17:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.572 09:17:34 -- common/autotest_common.sh@1543 -- # continue 00:04:00.572 09:17:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.572 09:17:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.572 09:17:34 -- common/autotest_common.sh@10 -- # set +x 00:04:00.572 09:17:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.572 09:17:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.572 09:17:34 -- common/autotest_common.sh@10 -- 
# set +x 00:04:00.572 09:17:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.553 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.553 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.553 09:17:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:01.553 09:17:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.553 09:17:35 -- common/autotest_common.sh@10 -- # set +x 00:04:01.812 09:17:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:01.812 09:17:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:01.812 09:17:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.812 09:17:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:01.812 09:17:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:01.812 09:17:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:01.812 09:17:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:01.812 09:17:35 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:01.812 09:17:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.812 09:17:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.812 09:17:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.812 09:17:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.812 09:17:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.812 09:17:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:01.812 09:17:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:01.812 09:17:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:01.812 09:17:35 -- common/autotest_common.sh@1566 -- # cat 
/sys/bus/pci/devices/0000:00:10.0/device 00:04:01.812 09:17:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:01.812 09:17:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:01.812 09:17:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:01.812 09:17:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:01.812 09:17:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:01.812 09:17:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:01.812 09:17:35 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:01.812 09:17:35 -- common/autotest_common.sh@1572 -- # return 0 00:04:01.812 09:17:35 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:01.812 09:17:35 -- common/autotest_common.sh@1580 -- # return 0 00:04:01.812 09:17:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:01.812 09:17:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:01.812 09:17:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:01.812 09:17:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:01.812 09:17:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:01.812 09:17:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.812 09:17:35 -- common/autotest_common.sh@10 -- # set +x 00:04:01.812 09:17:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:01.812 09:17:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:01.812 09:17:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.812 09:17:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.812 09:17:35 -- common/autotest_common.sh@10 -- # set +x 00:04:01.812 ************************************ 00:04:01.812 START TEST env 00:04:01.812 ************************************ 00:04:01.812 09:17:35 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.071 * Looking for test storage... 
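(Editor's note, not part of the log: the opal_revert_cleanup trace above selects controllers by reading each BDF's PCI device ID from sysfs and comparing it against 0x0a54; both controllers here report 0x0010, so none match. A minimal standalone sketch of that check follows — the helper name and the sysroot parameter are hypothetical, added so it can be exercised outside /sys.)

```shell
# Sketch of the device-ID filter seen in get_nvme_bdfs_by_id above.
# bdfs_by_device_id SYSROOT WANT BDF... prints each BDF whose
# SYSROOT/BDF/device file matches WANT (e.g. 0x0a54).
bdfs_by_device_id() {
    local sysroot=$1 want=$2 bdf dev
    for bdf in "${@:3}"; do
        dev=$(cat "$sysroot/$bdf/device")    # e.g. 0x0010 in the log above
        [[ $dev == "$want" ]] && printf '%s\n' "$bdf"
    done
    return 0
}
```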
00:04:02.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:02.071 09:17:35 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.071 09:17:35 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.071 09:17:35 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.071 09:17:35 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.071 09:17:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.071 09:17:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.071 09:17:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.071 09:17:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.071 09:17:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.071 09:17:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.071 09:17:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.071 09:17:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.071 09:17:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.071 09:17:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.071 09:17:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.071 09:17:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.071 09:17:35 env -- scripts/common.sh@345 -- # : 1 00:04:02.071 09:17:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.071 09:17:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.071 09:17:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.071 09:17:35 env -- scripts/common.sh@353 -- # local d=1 00:04:02.071 09:17:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.071 09:17:35 env -- scripts/common.sh@355 -- # echo 1 00:04:02.071 09:17:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.071 09:17:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.072 09:17:35 env -- scripts/common.sh@353 -- # local d=2 00:04:02.072 09:17:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.072 09:17:35 env -- scripts/common.sh@355 -- # echo 2 00:04:02.072 09:17:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.072 09:17:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.072 09:17:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.072 09:17:35 env -- scripts/common.sh@368 -- # return 0 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.072 --rc genhtml_branch_coverage=1 00:04:02.072 --rc genhtml_function_coverage=1 00:04:02.072 --rc genhtml_legend=1 00:04:02.072 --rc geninfo_all_blocks=1 00:04:02.072 --rc geninfo_unexecuted_blocks=1 00:04:02.072 00:04:02.072 ' 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.072 --rc genhtml_branch_coverage=1 00:04:02.072 --rc genhtml_function_coverage=1 00:04:02.072 --rc genhtml_legend=1 00:04:02.072 --rc geninfo_all_blocks=1 00:04:02.072 --rc geninfo_unexecuted_blocks=1 00:04:02.072 00:04:02.072 ' 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:02.072 --rc genhtml_branch_coverage=1 00:04:02.072 --rc genhtml_function_coverage=1 00:04:02.072 --rc genhtml_legend=1 00:04:02.072 --rc geninfo_all_blocks=1 00:04:02.072 --rc geninfo_unexecuted_blocks=1 00:04:02.072 00:04:02.072 ' 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.072 --rc genhtml_branch_coverage=1 00:04:02.072 --rc genhtml_function_coverage=1 00:04:02.072 --rc genhtml_legend=1 00:04:02.072 --rc geninfo_all_blocks=1 00:04:02.072 --rc geninfo_unexecuted_blocks=1 00:04:02.072 00:04:02.072 ' 00:04:02.072 09:17:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.072 09:17:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.072 09:17:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.072 ************************************ 00:04:02.072 START TEST env_memory 00:04:02.072 ************************************ 00:04:02.072 09:17:35 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.072 00:04:02.072 00:04:02.072 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.072 http://cunit.sourceforge.net/ 00:04:02.072 00:04:02.072 00:04:02.072 Suite: memory 00:04:02.072 Test: alloc and free memory map ...[2024-12-12 09:17:36.071453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.331 passed 00:04:02.331 Test: mem map translation ...[2024-12-12 09:17:36.115426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.331 [2024-12-12 09:17:36.115523] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.331 [2024-12-12 09:17:36.115611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.331 [2024-12-12 09:17:36.115643] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.331 passed 00:04:02.331 Test: mem map registration ...[2024-12-12 09:17:36.181855] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.331 [2024-12-12 09:17:36.181938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.331 passed 00:04:02.331 Test: mem map adjacent registrations ...passed 00:04:02.331 00:04:02.331 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.331 suites 1 1 n/a 0 0 00:04:02.331 tests 4 4 4 0 0 00:04:02.331 asserts 152 152 152 0 n/a 00:04:02.331 00:04:02.331 Elapsed time = 0.247 seconds 00:04:02.331 00:04:02.331 real 0m0.308s 00:04:02.331 user 0m0.271s 00:04:02.331 sys 0m0.027s 00:04:02.331 09:17:36 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.331 09:17:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.331 ************************************ 00:04:02.331 END TEST env_memory 00:04:02.331 ************************************ 00:04:02.331 09:17:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.331 09:17:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.331 09:17:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.331 09:17:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.590 
************************************ 00:04:02.591 START TEST env_vtophys 00:04:02.591 ************************************ 00:04:02.591 09:17:36 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.591 EAL: lib.eal log level changed from notice to debug 00:04:02.591 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 1 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 2 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 3 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 4 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 5 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 6 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 7 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 8 as core 0 on socket 0 00:04:02.591 EAL: Detected lcore 9 as core 0 on socket 0 00:04:02.591 EAL: Maximum logical cores by configuration: 128 00:04:02.591 EAL: Detected CPU lcores: 10 00:04:02.591 EAL: Detected NUMA nodes: 1 00:04:02.591 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.591 EAL: Detected shared linkage of DPDK 00:04:02.591 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.591 EAL: Selected IOVA mode 'PA' 00:04:02.591 EAL: Probing VFIO support... 00:04:02.591 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.591 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:02.591 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.591 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.591 EAL: Setting up physically contiguous memory... 
00:04:02.591 EAL: Setting maximum number of open files to 524288 00:04:02.591 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.591 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.591 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.591 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.591 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.591 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.591 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.591 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.591 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.591 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.591 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.591 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.591 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.591 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.591 EAL: Hugepages will be freed exactly as allocated. 
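(Editor's note, not part of the log: each memseg list above reserves virtual address space for n_segs hugepages; this arithmetic sketch — mine, not output from the run — shows that 8192 segments of 2MiB works out to the 0x400000000-byte ask repeated four times in the EAL trace.)

```shell
# Reproduce the per-memseg-list VA reservation size from the trace:
# n_segs:8192 at hugepage_sz:2097152 -> the 0x400000000 asked for above.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))   # 2MiB pages, as detected above
printf '0x%x\n' $((n_segs * hugepage_sz))   # -> 0x400000000
```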
00:04:02.591 EAL: No shared files mode enabled, IPC is disabled 00:04:02.591 EAL: No shared files mode enabled, IPC is disabled 00:04:02.591 EAL: TSC frequency is ~2290000 KHz 00:04:02.591 EAL: Main lcore 0 is ready (tid=7f61ca71ca40;cpuset=[0]) 00:04:02.591 EAL: Trying to obtain current memory policy. 00:04:02.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.591 EAL: Restoring previous memory policy: 0 00:04:02.591 EAL: request: mp_malloc_sync 00:04:02.591 EAL: No shared files mode enabled, IPC is disabled 00:04:02.591 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.591 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.591 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.591 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.591 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:02.591 00:04:02.591 00:04:02.591 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.591 http://cunit.sourceforge.net/ 00:04:02.591 00:04:02.591 00:04:02.591 Suite: components_suite 00:04:03.159 Test: vtophys_malloc_test ...passed 00:04:03.159 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.159 EAL: Restoring previous memory policy: 4 00:04:03.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.159 EAL: request: mp_malloc_sync 00:04:03.159 EAL: No shared files mode enabled, IPC is disabled 00:04:03.159 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.159 EAL: request: mp_malloc_sync 00:04:03.159 EAL: No shared files mode enabled, IPC is disabled 00:04:03.159 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.159 EAL: Trying to obtain current memory policy. 
00:04:03.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.159 EAL: Restoring previous memory policy: 4 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.160 EAL: Trying to obtain current memory policy. 00:04:03.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.160 EAL: Restoring previous memory policy: 4 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.160 EAL: Trying to obtain current memory policy. 00:04:03.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.160 EAL: Restoring previous memory policy: 4 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.160 EAL: Trying to obtain current memory policy. 
00:04:03.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.160 EAL: Restoring previous memory policy: 4 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.160 EAL: request: mp_malloc_sync 00:04:03.160 EAL: No shared files mode enabled, IPC is disabled 00:04:03.160 EAL: Heap on socket 0 was shrunk by 34MB 00:04:03.419 EAL: Trying to obtain current memory policy. 00:04:03.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.419 EAL: Restoring previous memory policy: 4 00:04:03.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.419 EAL: request: mp_malloc_sync 00:04:03.419 EAL: No shared files mode enabled, IPC is disabled 00:04:03.419 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.419 EAL: request: mp_malloc_sync 00:04:03.419 EAL: No shared files mode enabled, IPC is disabled 00:04:03.419 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.678 EAL: Trying to obtain current memory policy. 00:04:03.678 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.678 EAL: Restoring previous memory policy: 4 00:04:03.678 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.678 EAL: request: mp_malloc_sync 00:04:03.678 EAL: No shared files mode enabled, IPC is disabled 00:04:03.678 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.937 EAL: request: mp_malloc_sync 00:04:03.937 EAL: No shared files mode enabled, IPC is disabled 00:04:03.937 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.195 EAL: Trying to obtain current memory policy. 
00:04:04.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.195 EAL: Restoring previous memory policy: 4 00:04:04.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.195 EAL: request: mp_malloc_sync 00:04:04.195 EAL: No shared files mode enabled, IPC is disabled 00:04:04.195 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.762 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.762 EAL: request: mp_malloc_sync 00:04:04.762 EAL: No shared files mode enabled, IPC is disabled 00:04:04.762 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.021 EAL: Trying to obtain current memory policy. 00:04:05.021 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.279 EAL: Restoring previous memory policy: 4 00:04:05.279 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.279 EAL: request: mp_malloc_sync 00:04:05.279 EAL: No shared files mode enabled, IPC is disabled 00:04:05.279 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.215 EAL: request: mp_malloc_sync 00:04:06.215 EAL: No shared files mode enabled, IPC is disabled 00:04:06.215 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.166 EAL: Trying to obtain current memory policy. 
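(Editor's note, not part of the log: the heap expand/shrink sizes in the vtophys_malloc_test trace — 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB — follow 2 + 2^n MB. This loop, an observation about the sequence rather than code from the test, reproduces the sizes the allocator was exercised with.)

```shell
# Generate the allocation-size sequence seen in the malloc test above:
# 2 + 2^n MB for n = 1..10.
for n in $(seq 1 10); do
    printf '%dMB\n' $((2 + (1 << n)))
done
```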
00:04:07.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.166 EAL: Restoring previous memory policy: 4 00:04:07.166 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.166 EAL: request: mp_malloc_sync 00:04:07.166 EAL: No shared files mode enabled, IPC is disabled 00:04:07.166 EAL: Heap on socket 0 was expanded by 1026MB 00:04:09.070 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.638 EAL: request: mp_malloc_sync 00:04:09.638 EAL: No shared files mode enabled, IPC is disabled 00:04:09.638 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:11.540 passed 00:04:11.540 00:04:11.540 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.540 suites 1 1 n/a 0 0 00:04:11.540 tests 2 2 2 0 0 00:04:11.540 asserts 5768 5768 5768 0 n/a 00:04:11.540 00:04:11.540 Elapsed time = 8.405 seconds 00:04:11.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.540 EAL: request: mp_malloc_sync 00:04:11.540 EAL: No shared files mode enabled, IPC is disabled 00:04:11.540 EAL: Heap on socket 0 was shrunk by 2MB 00:04:11.540 EAL: No shared files mode enabled, IPC is disabled 00:04:11.540 EAL: No shared files mode enabled, IPC is disabled 00:04:11.540 EAL: No shared files mode enabled, IPC is disabled 00:04:11.540 00:04:11.540 real 0m8.749s 00:04:11.540 user 0m7.696s 00:04:11.540 sys 0m0.889s 00:04:11.540 09:17:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.540 09:17:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:11.540 ************************************ 00:04:11.540 END TEST env_vtophys 00:04:11.540 ************************************ 00:04:11.540 09:17:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:11.540 09:17:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.540 09:17:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.540 09:17:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.540 
************************************ 00:04:11.540 START TEST env_pci 00:04:11.540 ************************************ 00:04:11.540 09:17:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:11.540 00:04:11.540 00:04:11.540 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.540 http://cunit.sourceforge.net/ 00:04:11.540 00:04:11.540 00:04:11.540 Suite: pci 00:04:11.540 Test: pci_hook ...[2024-12-12 09:17:45.228376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57930 has claimed it 00:04:11.540 passed 00:04:11.540 00:04:11.540 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.540 suites 1 1 n/a 0 0 00:04:11.540 tests 1 1 1 0 0 00:04:11.540 asserts 25 25 25 0 n/a 00:04:11.540 00:04:11.540 Elapsed time = 0.008 seconds 00:04:11.540 EAL: Cannot find device (10000:00:01.0) 00:04:11.540 EAL: Failed to attach device on primary process 00:04:11.540 00:04:11.540 real 0m0.110s 00:04:11.540 user 0m0.045s 00:04:11.540 sys 0m0.063s 00:04:11.540 09:17:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.540 09:17:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:11.540 ************************************ 00:04:11.540 END TEST env_pci 00:04:11.540 ************************************ 00:04:11.540 09:17:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:11.540 09:17:45 env -- env/env.sh@15 -- # uname 00:04:11.540 09:17:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:11.540 09:17:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:11.540 09:17:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:11.540 09:17:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:11.540 09:17:45 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.540 09:17:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.540 ************************************ 00:04:11.540 START TEST env_dpdk_post_init 00:04:11.540 ************************************ 00:04:11.540 09:17:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:11.540 EAL: Detected CPU lcores: 10 00:04:11.540 EAL: Detected NUMA nodes: 1 00:04:11.540 EAL: Detected shared linkage of DPDK 00:04:11.540 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:11.540 EAL: Selected IOVA mode 'PA' 00:04:11.799 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:11.799 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:11.799 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:11.799 Starting DPDK initialization... 00:04:11.799 Starting SPDK post initialization... 00:04:11.799 SPDK NVMe probe 00:04:11.799 Attaching to 0000:00:10.0 00:04:11.799 Attaching to 0000:00:11.0 00:04:11.799 Attached to 0000:00:10.0 00:04:11.799 Attached to 0000:00:11.0 00:04:11.799 Cleaning up... 
00:04:11.799 00:04:11.799 real 0m0.315s 00:04:11.799 user 0m0.103s 00:04:11.799 sys 0m0.112s 00:04:11.799 09:17:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.799 09:17:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.799 ************************************ 00:04:11.799 END TEST env_dpdk_post_init 00:04:11.799 ************************************ 00:04:11.799 09:17:45 env -- env/env.sh@26 -- # uname 00:04:11.799 09:17:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:11.799 09:17:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.799 09:17:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.799 09:17:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.799 09:17:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.799 ************************************ 00:04:11.799 START TEST env_mem_callbacks 00:04:11.799 ************************************ 00:04:11.799 09:17:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.799 EAL: Detected CPU lcores: 10 00:04:11.799 EAL: Detected NUMA nodes: 1 00:04:11.799 EAL: Detected shared linkage of DPDK 00:04:11.799 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.058 EAL: Selected IOVA mode 'PA' 00:04:12.058 00:04:12.058 00:04:12.058 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.058 http://cunit.sourceforge.net/ 00:04:12.058 00:04:12.058 00:04:12.058 Suite: memory 00:04:12.058 Test: test ... 
00:04:12.058 register 0x200000200000 2097152 00:04:12.058 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.058 malloc 3145728 00:04:12.058 register 0x200000400000 4194304 00:04:12.058 buf 0x2000004fffc0 len 3145728 PASSED 00:04:12.058 malloc 64 00:04:12.058 buf 0x2000004ffec0 len 64 PASSED 00:04:12.058 malloc 4194304 00:04:12.058 register 0x200000800000 6291456 00:04:12.058 buf 0x2000009fffc0 len 4194304 PASSED 00:04:12.058 free 0x2000004fffc0 3145728 00:04:12.058 free 0x2000004ffec0 64 00:04:12.058 unregister 0x200000400000 4194304 PASSED 00:04:12.058 free 0x2000009fffc0 4194304 00:04:12.058 unregister 0x200000800000 6291456 PASSED 00:04:12.058 malloc 8388608 00:04:12.058 register 0x200000400000 10485760 00:04:12.058 buf 0x2000005fffc0 len 8388608 PASSED 00:04:12.058 free 0x2000005fffc0 8388608 00:04:12.058 unregister 0x200000400000 10485760 PASSED 00:04:12.058 passed 00:04:12.058 00:04:12.058 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.058 suites 1 1 n/a 0 0 00:04:12.058 tests 1 1 1 0 0 00:04:12.058 asserts 15 15 15 0 n/a 00:04:12.058 00:04:12.058 Elapsed time = 0.081 seconds 00:04:12.058 00:04:12.058 real 0m0.279s 00:04:12.058 user 0m0.116s 00:04:12.058 sys 0m0.061s 00:04:12.058 09:17:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.058 09:17:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:12.058 ************************************ 00:04:12.058 END TEST env_mem_callbacks 00:04:12.058 ************************************ 00:04:12.316 00:04:12.316 real 0m10.332s 00:04:12.316 user 0m8.457s 00:04:12.316 sys 0m1.529s 00:04:12.316 09:17:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.316 09:17:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.316 ************************************ 00:04:12.316 END TEST env 00:04:12.316 ************************************ 00:04:12.316 09:17:46 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:12.316 09:17:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.316 09:17:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.316 09:17:46 -- common/autotest_common.sh@10 -- # set +x 00:04:12.316 ************************************ 00:04:12.316 START TEST rpc 00:04:12.316 ************************************ 00:04:12.316 09:17:46 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:12.316 * Looking for test storage... 00:04:12.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.316 09:17:46 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.316 09:17:46 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.316 09:17:46 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.575 09:17:46 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.576 09:17:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.576 09:17:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.576 09:17:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.576 09:17:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.576 09:17:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.576 09:17:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.576 09:17:46 rpc -- scripts/common.sh@345 -- # : 1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.576 09:17:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.576 09:17:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.576 09:17:46 rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.576 09:17:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.576 09:17:46 rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.576 09:17:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.576 09:17:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.576 09:17:46 rpc -- scripts/common.sh@368 -- # return 0 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.576 --rc genhtml_branch_coverage=1 00:04:12.576 --rc genhtml_function_coverage=1 00:04:12.576 --rc genhtml_legend=1 00:04:12.576 --rc geninfo_all_blocks=1 00:04:12.576 --rc geninfo_unexecuted_blocks=1 00:04:12.576 00:04:12.576 ' 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.576 --rc genhtml_branch_coverage=1 00:04:12.576 --rc genhtml_function_coverage=1 00:04:12.576 --rc genhtml_legend=1 00:04:12.576 --rc geninfo_all_blocks=1 00:04:12.576 --rc geninfo_unexecuted_blocks=1 00:04:12.576 00:04:12.576 ' 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:12.576 --rc genhtml_branch_coverage=1 00:04:12.576 --rc genhtml_function_coverage=1 00:04:12.576 --rc genhtml_legend=1 00:04:12.576 --rc geninfo_all_blocks=1 00:04:12.576 --rc geninfo_unexecuted_blocks=1 00:04:12.576 00:04:12.576 ' 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.576 --rc genhtml_branch_coverage=1 00:04:12.576 --rc genhtml_function_coverage=1 00:04:12.576 --rc genhtml_legend=1 00:04:12.576 --rc geninfo_all_blocks=1 00:04:12.576 --rc geninfo_unexecuted_blocks=1 00:04:12.576 00:04:12.576 ' 00:04:12.576 09:17:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58057 00:04:12.576 09:17:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:12.576 09:17:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.576 09:17:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58057 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 58057 ']' 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.576 09:17:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.576 [2024-12-12 09:17:46.470574] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:12.576 [2024-12-12 09:17:46.470721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58057 ] 00:04:12.835 [2024-12-12 09:17:46.643034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.835 [2024-12-12 09:17:46.754236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:12.835 [2024-12-12 09:17:46.754292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58057' to capture a snapshot of events at runtime. 00:04:12.835 [2024-12-12 09:17:46.754302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:12.835 [2024-12-12 09:17:46.754314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:12.835 [2024-12-12 09:17:46.754322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58057 for offline analysis/debug. 
00:04:12.835 [2024-12-12 09:17:46.755482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.772 09:17:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.772 09:17:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.772 09:17:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.772 09:17:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.772 09:17:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:13.772 09:17:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:13.772 09:17:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.772 09:17:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.772 09:17:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.772 ************************************ 00:04:13.772 START TEST rpc_integrity 00:04:13.772 ************************************ 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.772 09:17:47 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.772 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.772 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.772 { 00:04:13.772 "name": "Malloc0", 00:04:13.772 "aliases": [ 00:04:13.772 "62c6b6c6-f84d-4561-a07c-48445df1d80e" 00:04:13.772 ], 00:04:13.772 "product_name": "Malloc disk", 00:04:13.772 "block_size": 512, 00:04:13.772 "num_blocks": 16384, 00:04:13.773 "uuid": "62c6b6c6-f84d-4561-a07c-48445df1d80e", 00:04:13.773 "assigned_rate_limits": { 00:04:13.773 "rw_ios_per_sec": 0, 00:04:13.773 "rw_mbytes_per_sec": 0, 00:04:13.773 "r_mbytes_per_sec": 0, 00:04:13.773 "w_mbytes_per_sec": 0 00:04:13.773 }, 00:04:13.773 "claimed": false, 00:04:13.773 "zoned": false, 00:04:13.773 "supported_io_types": { 00:04:13.773 "read": true, 00:04:13.773 "write": true, 00:04:13.773 "unmap": true, 00:04:13.773 "flush": true, 00:04:13.773 "reset": true, 00:04:13.773 "nvme_admin": false, 00:04:13.773 "nvme_io": false, 00:04:13.773 "nvme_io_md": false, 00:04:13.773 "write_zeroes": true, 00:04:13.773 "zcopy": true, 00:04:13.773 "get_zone_info": false, 00:04:13.773 "zone_management": false, 00:04:13.773 "zone_append": false, 00:04:13.773 "compare": false, 00:04:13.773 "compare_and_write": false, 00:04:13.773 "abort": true, 00:04:13.773 "seek_hole": false, 
00:04:13.773 "seek_data": false, 00:04:13.773 "copy": true, 00:04:13.773 "nvme_iov_md": false 00:04:13.773 }, 00:04:13.773 "memory_domains": [ 00:04:13.773 { 00:04:13.773 "dma_device_id": "system", 00:04:13.773 "dma_device_type": 1 00:04:13.773 }, 00:04:13.773 { 00:04:13.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.773 "dma_device_type": 2 00:04:13.773 } 00:04:13.773 ], 00:04:13.773 "driver_specific": {} 00:04:13.773 } 00:04:13.773 ]' 00:04:13.773 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.773 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.773 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:13.773 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.773 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.773 [2024-12-12 09:17:47.755134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:13.773 [2024-12-12 09:17:47.755193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.773 [2024-12-12 09:17:47.755222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:13.773 [2024-12-12 09:17:47.755238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.773 [2024-12-12 09:17:47.757468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.773 [2024-12-12 09:17:47.757511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.773 Passthru0 00:04:13.773 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.773 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.773 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.773 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:13.773 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.773 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.773 { 00:04:13.773 "name": "Malloc0", 00:04:13.773 "aliases": [ 00:04:13.773 "62c6b6c6-f84d-4561-a07c-48445df1d80e" 00:04:13.773 ], 00:04:13.773 "product_name": "Malloc disk", 00:04:13.773 "block_size": 512, 00:04:13.773 "num_blocks": 16384, 00:04:13.773 "uuid": "62c6b6c6-f84d-4561-a07c-48445df1d80e", 00:04:13.773 "assigned_rate_limits": { 00:04:13.773 "rw_ios_per_sec": 0, 00:04:13.773 "rw_mbytes_per_sec": 0, 00:04:13.773 "r_mbytes_per_sec": 0, 00:04:13.773 "w_mbytes_per_sec": 0 00:04:13.773 }, 00:04:13.773 "claimed": true, 00:04:13.773 "claim_type": "exclusive_write", 00:04:13.773 "zoned": false, 00:04:13.773 "supported_io_types": { 00:04:13.773 "read": true, 00:04:13.773 "write": true, 00:04:13.773 "unmap": true, 00:04:13.773 "flush": true, 00:04:13.773 "reset": true, 00:04:13.773 "nvme_admin": false, 00:04:13.773 "nvme_io": false, 00:04:13.773 "nvme_io_md": false, 00:04:13.773 "write_zeroes": true, 00:04:13.773 "zcopy": true, 00:04:13.773 "get_zone_info": false, 00:04:13.773 "zone_management": false, 00:04:13.773 "zone_append": false, 00:04:13.773 "compare": false, 00:04:13.773 "compare_and_write": false, 00:04:13.773 "abort": true, 00:04:13.773 "seek_hole": false, 00:04:13.773 "seek_data": false, 00:04:13.773 "copy": true, 00:04:13.773 "nvme_iov_md": false 00:04:13.773 }, 00:04:13.773 "memory_domains": [ 00:04:13.773 { 00:04:13.773 "dma_device_id": "system", 00:04:13.773 "dma_device_type": 1 00:04:13.773 }, 00:04:13.773 { 00:04:13.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.773 "dma_device_type": 2 00:04:13.773 } 00:04:13.773 ], 00:04:13.773 "driver_specific": {} 00:04:13.773 }, 00:04:13.773 { 00:04:13.773 "name": "Passthru0", 00:04:13.773 "aliases": [ 00:04:13.773 "e71e0264-d3de-5720-8a74-981593bcbbee" 00:04:13.773 ], 00:04:13.773 "product_name": "passthru", 00:04:13.773 
"block_size": 512, 00:04:13.773 "num_blocks": 16384, 00:04:13.773 "uuid": "e71e0264-d3de-5720-8a74-981593bcbbee", 00:04:13.773 "assigned_rate_limits": { 00:04:13.773 "rw_ios_per_sec": 0, 00:04:13.773 "rw_mbytes_per_sec": 0, 00:04:13.773 "r_mbytes_per_sec": 0, 00:04:13.773 "w_mbytes_per_sec": 0 00:04:13.773 }, 00:04:13.773 "claimed": false, 00:04:13.773 "zoned": false, 00:04:13.773 "supported_io_types": { 00:04:13.773 "read": true, 00:04:13.773 "write": true, 00:04:13.773 "unmap": true, 00:04:13.773 "flush": true, 00:04:13.773 "reset": true, 00:04:13.773 "nvme_admin": false, 00:04:13.773 "nvme_io": false, 00:04:13.773 "nvme_io_md": false, 00:04:13.773 "write_zeroes": true, 00:04:13.773 "zcopy": true, 00:04:13.773 "get_zone_info": false, 00:04:13.773 "zone_management": false, 00:04:13.773 "zone_append": false, 00:04:13.773 "compare": false, 00:04:13.773 "compare_and_write": false, 00:04:13.773 "abort": true, 00:04:13.773 "seek_hole": false, 00:04:13.773 "seek_data": false, 00:04:13.773 "copy": true, 00:04:13.773 "nvme_iov_md": false 00:04:13.773 }, 00:04:13.773 "memory_domains": [ 00:04:13.773 { 00:04:13.773 "dma_device_id": "system", 00:04:13.773 "dma_device_type": 1 00:04:13.773 }, 00:04:13.773 { 00:04:13.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.773 "dma_device_type": 2 00:04:13.773 } 00:04:13.773 ], 00:04:13.773 "driver_specific": { 00:04:13.773 "passthru": { 00:04:13.773 "name": "Passthru0", 00:04:13.773 "base_bdev_name": "Malloc0" 00:04:13.773 } 00:04:13.773 } 00:04:13.773 } 00:04:13.773 ]' 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 09:17:47 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.035 09:17:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.035 00:04:14.035 real 0m0.338s 00:04:14.035 user 0m0.191s 00:04:14.035 sys 0m0.042s 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.035 09:17:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 ************************************ 00:04:14.035 END TEST rpc_integrity 00:04:14.035 ************************************ 00:04:14.035 09:17:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.035 09:17:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.035 09:17:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.035 09:17:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 ************************************ 00:04:14.035 START TEST rpc_plugins 00:04:14.035 ************************************ 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:14.035 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.035 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.035 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.035 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.035 { 00:04:14.035 "name": "Malloc1", 00:04:14.035 "aliases": [ 00:04:14.035 "b8dfc720-3725-4538-8bb4-1e190edfe377" 00:04:14.035 ], 00:04:14.035 "product_name": "Malloc disk", 00:04:14.035 "block_size": 4096, 00:04:14.035 "num_blocks": 256, 00:04:14.035 "uuid": "b8dfc720-3725-4538-8bb4-1e190edfe377", 00:04:14.035 "assigned_rate_limits": { 00:04:14.035 "rw_ios_per_sec": 0, 00:04:14.035 "rw_mbytes_per_sec": 0, 00:04:14.035 "r_mbytes_per_sec": 0, 00:04:14.035 "w_mbytes_per_sec": 0 00:04:14.035 }, 00:04:14.035 "claimed": false, 00:04:14.035 "zoned": false, 00:04:14.035 "supported_io_types": { 00:04:14.035 "read": true, 00:04:14.035 "write": true, 00:04:14.035 "unmap": true, 00:04:14.035 "flush": true, 00:04:14.035 "reset": true, 00:04:14.035 "nvme_admin": false, 00:04:14.035 "nvme_io": false, 00:04:14.035 "nvme_io_md": false, 00:04:14.035 "write_zeroes": true, 00:04:14.035 "zcopy": true, 00:04:14.035 "get_zone_info": false, 00:04:14.035 "zone_management": false, 00:04:14.035 "zone_append": false, 00:04:14.035 "compare": false, 00:04:14.035 "compare_and_write": false, 00:04:14.035 "abort": true, 00:04:14.035 "seek_hole": false, 00:04:14.035 "seek_data": false, 00:04:14.035 "copy": 
true, 00:04:14.035 "nvme_iov_md": false 00:04:14.035 }, 00:04:14.035 "memory_domains": [ 00:04:14.035 { 00:04:14.035 "dma_device_id": "system", 00:04:14.035 "dma_device_type": 1 00:04:14.035 }, 00:04:14.035 { 00:04:14.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.035 "dma_device_type": 2 00:04:14.035 } 00:04:14.035 ], 00:04:14.035 "driver_specific": {} 00:04:14.035 } 00:04:14.035 ]' 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.297 09:17:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.297 00:04:14.297 real 0m0.166s 00:04:14.297 user 0m0.090s 00:04:14.297 sys 0m0.030s 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.297 09:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 ************************************ 00:04:14.297 END TEST rpc_plugins 00:04:14.297 ************************************ 00:04:14.297 09:17:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.297 09:17:48 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.297 09:17:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.297 09:17:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 ************************************ 00:04:14.297 START TEST rpc_trace_cmd_test 00:04:14.297 ************************************ 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.297 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58057", 00:04:14.297 "tpoint_group_mask": "0x8", 00:04:14.297 "iscsi_conn": { 00:04:14.297 "mask": "0x2", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "scsi": { 00:04:14.297 "mask": "0x4", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "bdev": { 00:04:14.297 "mask": "0x8", 00:04:14.297 "tpoint_mask": "0xffffffffffffffff" 00:04:14.297 }, 00:04:14.297 "nvmf_rdma": { 00:04:14.297 "mask": "0x10", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "nvmf_tcp": { 00:04:14.297 "mask": "0x20", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "ftl": { 00:04:14.297 "mask": "0x40", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "blobfs": { 00:04:14.297 "mask": "0x80", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "dsa": { 00:04:14.297 "mask": "0x200", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "thread": { 00:04:14.297 "mask": "0x400", 00:04:14.297 
"tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "nvme_pcie": { 00:04:14.297 "mask": "0x800", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "iaa": { 00:04:14.297 "mask": "0x1000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "nvme_tcp": { 00:04:14.297 "mask": "0x2000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "bdev_nvme": { 00:04:14.297 "mask": "0x4000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "sock": { 00:04:14.297 "mask": "0x8000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "blob": { 00:04:14.297 "mask": "0x10000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "bdev_raid": { 00:04:14.297 "mask": "0x20000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 }, 00:04:14.297 "scheduler": { 00:04:14.297 "mask": "0x40000", 00:04:14.297 "tpoint_mask": "0x0" 00:04:14.297 } 00:04:14.297 }' 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.297 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.556 ************************************ 00:04:14.556 END TEST rpc_trace_cmd_test 00:04:14.556 ************************************ 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:14.556 00:04:14.556 real 0m0.261s 00:04:14.556 user 
0m0.213s 00:04:14.556 sys 0m0.037s 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.556 09:17:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.556 09:17:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.556 09:17:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.556 09:17:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.556 09:17:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.556 09:17:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.556 09:17:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.556 ************************************ 00:04:14.556 START TEST rpc_daemon_integrity 00:04:14.556 ************************************ 00:04:14.556 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:14.556 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.556 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.556 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.556 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.556 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.815 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.816 { 00:04:14.816 "name": "Malloc2", 00:04:14.816 "aliases": [ 00:04:14.816 "be9a9645-aabf-4c85-8f39-7281da766e11" 00:04:14.816 ], 00:04:14.816 "product_name": "Malloc disk", 00:04:14.816 "block_size": 512, 00:04:14.816 "num_blocks": 16384, 00:04:14.816 "uuid": "be9a9645-aabf-4c85-8f39-7281da766e11", 00:04:14.816 "assigned_rate_limits": { 00:04:14.816 "rw_ios_per_sec": 0, 00:04:14.816 "rw_mbytes_per_sec": 0, 00:04:14.816 "r_mbytes_per_sec": 0, 00:04:14.816 "w_mbytes_per_sec": 0 00:04:14.816 }, 00:04:14.816 "claimed": false, 00:04:14.816 "zoned": false, 00:04:14.816 "supported_io_types": { 00:04:14.816 "read": true, 00:04:14.816 "write": true, 00:04:14.816 "unmap": true, 00:04:14.816 "flush": true, 00:04:14.816 "reset": true, 00:04:14.816 "nvme_admin": false, 00:04:14.816 "nvme_io": false, 00:04:14.816 "nvme_io_md": false, 00:04:14.816 "write_zeroes": true, 00:04:14.816 "zcopy": true, 00:04:14.816 "get_zone_info": false, 00:04:14.816 "zone_management": false, 00:04:14.816 "zone_append": false, 00:04:14.816 "compare": false, 00:04:14.816 "compare_and_write": false, 00:04:14.816 "abort": true, 00:04:14.816 "seek_hole": false, 00:04:14.816 "seek_data": false, 00:04:14.816 "copy": true, 00:04:14.816 "nvme_iov_md": false 00:04:14.816 }, 00:04:14.816 "memory_domains": [ 00:04:14.816 { 00:04:14.816 "dma_device_id": "system", 00:04:14.816 "dma_device_type": 1 00:04:14.816 }, 00:04:14.816 { 00:04:14.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.816 "dma_device_type": 2 00:04:14.816 } 
00:04:14.816 ], 00:04:14.816 "driver_specific": {} 00:04:14.816 } 00:04:14.816 ]' 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.816 [2024-12-12 09:17:48.725821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.816 [2024-12-12 09:17:48.725882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.816 [2024-12-12 09:17:48.725904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:14.816 [2024-12-12 09:17:48.725918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.816 [2024-12-12 09:17:48.728085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.816 [2024-12-12 09:17:48.728127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.816 Passthru0 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.816 { 00:04:14.816 "name": "Malloc2", 00:04:14.816 "aliases": [ 00:04:14.816 "be9a9645-aabf-4c85-8f39-7281da766e11" 
00:04:14.816 ], 00:04:14.816 "product_name": "Malloc disk", 00:04:14.816 "block_size": 512, 00:04:14.816 "num_blocks": 16384, 00:04:14.816 "uuid": "be9a9645-aabf-4c85-8f39-7281da766e11", 00:04:14.816 "assigned_rate_limits": { 00:04:14.816 "rw_ios_per_sec": 0, 00:04:14.816 "rw_mbytes_per_sec": 0, 00:04:14.816 "r_mbytes_per_sec": 0, 00:04:14.816 "w_mbytes_per_sec": 0 00:04:14.816 }, 00:04:14.816 "claimed": true, 00:04:14.816 "claim_type": "exclusive_write", 00:04:14.816 "zoned": false, 00:04:14.816 "supported_io_types": { 00:04:14.816 "read": true, 00:04:14.816 "write": true, 00:04:14.816 "unmap": true, 00:04:14.816 "flush": true, 00:04:14.816 "reset": true, 00:04:14.816 "nvme_admin": false, 00:04:14.816 "nvme_io": false, 00:04:14.816 "nvme_io_md": false, 00:04:14.816 "write_zeroes": true, 00:04:14.816 "zcopy": true, 00:04:14.816 "get_zone_info": false, 00:04:14.816 "zone_management": false, 00:04:14.816 "zone_append": false, 00:04:14.816 "compare": false, 00:04:14.816 "compare_and_write": false, 00:04:14.816 "abort": true, 00:04:14.816 "seek_hole": false, 00:04:14.816 "seek_data": false, 00:04:14.816 "copy": true, 00:04:14.816 "nvme_iov_md": false 00:04:14.816 }, 00:04:14.816 "memory_domains": [ 00:04:14.816 { 00:04:14.816 "dma_device_id": "system", 00:04:14.816 "dma_device_type": 1 00:04:14.816 }, 00:04:14.816 { 00:04:14.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.816 "dma_device_type": 2 00:04:14.816 } 00:04:14.816 ], 00:04:14.816 "driver_specific": {} 00:04:14.816 }, 00:04:14.816 { 00:04:14.816 "name": "Passthru0", 00:04:14.816 "aliases": [ 00:04:14.816 "701a6cc6-5476-53ae-87bf-666f1301183e" 00:04:14.816 ], 00:04:14.816 "product_name": "passthru", 00:04:14.816 "block_size": 512, 00:04:14.816 "num_blocks": 16384, 00:04:14.816 "uuid": "701a6cc6-5476-53ae-87bf-666f1301183e", 00:04:14.816 "assigned_rate_limits": { 00:04:14.816 "rw_ios_per_sec": 0, 00:04:14.816 "rw_mbytes_per_sec": 0, 00:04:14.816 "r_mbytes_per_sec": 0, 00:04:14.816 "w_mbytes_per_sec": 0 
00:04:14.816 }, 00:04:14.816 "claimed": false, 00:04:14.816 "zoned": false, 00:04:14.816 "supported_io_types": { 00:04:14.816 "read": true, 00:04:14.816 "write": true, 00:04:14.816 "unmap": true, 00:04:14.816 "flush": true, 00:04:14.816 "reset": true, 00:04:14.816 "nvme_admin": false, 00:04:14.816 "nvme_io": false, 00:04:14.816 "nvme_io_md": false, 00:04:14.816 "write_zeroes": true, 00:04:14.816 "zcopy": true, 00:04:14.816 "get_zone_info": false, 00:04:14.816 "zone_management": false, 00:04:14.816 "zone_append": false, 00:04:14.816 "compare": false, 00:04:14.816 "compare_and_write": false, 00:04:14.816 "abort": true, 00:04:14.816 "seek_hole": false, 00:04:14.816 "seek_data": false, 00:04:14.816 "copy": true, 00:04:14.816 "nvme_iov_md": false 00:04:14.816 }, 00:04:14.816 "memory_domains": [ 00:04:14.816 { 00:04:14.816 "dma_device_id": "system", 00:04:14.816 "dma_device_type": 1 00:04:14.816 }, 00:04:14.816 { 00:04:14.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.816 "dma_device_type": 2 00:04:14.816 } 00:04:14.816 ], 00:04:14.816 "driver_specific": { 00:04:14.816 "passthru": { 00:04:14.816 "name": "Passthru0", 00:04:14.816 "base_bdev_name": "Malloc2" 00:04:14.816 } 00:04:14.816 } 00:04:14.816 } 00:04:14.816 ]' 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:14.816 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.075 00:04:15.075 real 0m0.347s 00:04:15.075 user 0m0.204s 00:04:15.075 sys 0m0.045s 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.075 09:17:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.075 ************************************ 00:04:15.075 END TEST rpc_daemon_integrity 00:04:15.075 ************************************ 00:04:15.075 09:17:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.075 09:17:48 rpc -- rpc/rpc.sh@84 -- # killprocess 58057 00:04:15.075 09:17:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 58057 ']' 00:04:15.075 09:17:48 rpc -- common/autotest_common.sh@958 -- # kill -0 58057 00:04:15.075 09:17:48 rpc -- common/autotest_common.sh@959 -- # uname 00:04:15.075 09:17:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.075 09:17:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58057 00:04:15.075 09:17:49 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.075 09:17:49 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.076 
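The teardown traced above goes through autotest_common.sh's `killprocess` helper: probe the pid with `kill -0`, guard on `uname` and `ps --no-headers -o comm=`, then terminate and reap. A minimal hedged re-sketch of that liveness-check-then-kill pattern (the `ps`/sudo guards are omitted here, and the function body is illustrative, not the in-tree implementation):

```shell
# Simplified sketch of the killprocess pattern seen in the trace: probe with
# `kill -0` (signal 0 checks existence/permission only, sends nothing), then
# terminate and reap the child so the pid cannot be reused under us.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not running (or not ours)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; exit 143 (SIGTERM) is expected
}

sleep 60 & pid=$!
killprocess "$pid"
```

Reaping with `wait` matters: without it the killed target lingers as a zombie and a later `kill -0` on the same pid could still succeed.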
09:17:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58057' 00:04:15.076 killing process with pid 58057 00:04:15.076 09:17:49 rpc -- common/autotest_common.sh@973 -- # kill 58057 00:04:15.076 09:17:49 rpc -- common/autotest_common.sh@978 -- # wait 58057 00:04:17.610 ************************************ 00:04:17.610 END TEST rpc 00:04:17.610 ************************************ 00:04:17.610 00:04:17.610 real 0m5.205s 00:04:17.610 user 0m5.766s 00:04:17.610 sys 0m0.904s 00:04:17.610 09:17:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.610 09:17:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.610 09:17:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.610 09:17:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.610 09:17:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.610 09:17:51 -- common/autotest_common.sh@10 -- # set +x 00:04:17.610 ************************************ 00:04:17.610 START TEST skip_rpc 00:04:17.610 ************************************ 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.610 * Looking for test storage... 
00:04:17.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.610 09:17:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.610 --rc genhtml_branch_coverage=1 00:04:17.610 --rc genhtml_function_coverage=1 00:04:17.610 --rc genhtml_legend=1 00:04:17.610 --rc geninfo_all_blocks=1 00:04:17.610 --rc geninfo_unexecuted_blocks=1 00:04:17.610 00:04:17.610 ' 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.610 --rc genhtml_branch_coverage=1 00:04:17.610 --rc genhtml_function_coverage=1 00:04:17.610 --rc genhtml_legend=1 00:04:17.610 --rc geninfo_all_blocks=1 00:04:17.610 --rc geninfo_unexecuted_blocks=1 00:04:17.610 00:04:17.610 ' 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:17.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.610 --rc genhtml_branch_coverage=1 00:04:17.610 --rc genhtml_function_coverage=1 00:04:17.610 --rc genhtml_legend=1 00:04:17.610 --rc geninfo_all_blocks=1 00:04:17.610 --rc geninfo_unexecuted_blocks=1 00:04:17.610 00:04:17.610 ' 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.610 --rc genhtml_branch_coverage=1 00:04:17.610 --rc genhtml_function_coverage=1 00:04:17.610 --rc genhtml_legend=1 00:04:17.610 --rc geninfo_all_blocks=1 00:04:17.610 --rc geninfo_unexecuted_blocks=1 00:04:17.610 00:04:17.610 ' 00:04:17.610 09:17:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.610 09:17:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.610 09:17:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.610 09:17:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 START TEST skip_rpc 00:04:17.870 ************************************ 00:04:17.870 09:17:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:17.870 09:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58290 00:04:17.870 09:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.870 09:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.870 09:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:17.870 [2024-12-12 09:17:51.742795] Starting SPDK v25.01-pre 
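The lcov gate above runs scripts/common.sh's `lt 1.15 2`, which splits both version strings into fields and compares them position by position. A compact hedged rendition of that dotted-version comparison (a sketch only; the in-tree `cmp_versions` also handles `>`/`=` operators and the `.-:` mixed separators seen in the trace):

```shell
# Field-by-field dotted-version compare, mirroring the lt/cmp_versions trace
# above. Missing fields count as 0, so "1.15" vs "2" compares 1 < 2 and stops.
version_lt() {
    local IFS='.'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov predates 2: fall back to the legacy LCOV_OPTS"
```

This numeric comparison is why the gate uses a helper instead of a plain string test: lexicographically `"1.15" < "2"` holds, but `"2.9" < "2.10"` would not.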
git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:17.870 [2024-12-12 09:17:51.742912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:04:18.128 [2024-12-12 09:17:51.913408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.128 [2024-12-12 09:17:52.026723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.397 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58290 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58290 ']' 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58290 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58290 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.398 killing process with pid 58290 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58290' 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58290 00:04:23.398 09:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58290 00:04:25.304 00:04:25.304 real 0m7.446s 00:04:25.304 user 0m6.993s 00:04:25.304 sys 0m0.373s 00:04:25.304 09:17:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.304 09:17:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.304 ************************************ 00:04:25.304 END TEST skip_rpc 00:04:25.304 ************************************ 00:04:25.304 09:17:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:25.304 09:17:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.304 09:17:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.304 09:17:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.304 
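The `NOT rpc_cmd spdk_get_version` sequence above is autotest_common.sh's negative-test wrapper: with spdk_tgt launched under `--no-rpc-server`, the RPC call must fail for skip_rpc to pass. A stripped-down hedged sketch of that invert-the-exit-status idiom (the real helper also validates the command with `type -t` and treats `es > 128` specially, as the `valid_exec_arg` trace lines show):

```shell
# Negative-test wrapper: succeed only when the wrapped command fails,
# mirroring the NOT/es bookkeeping in the skip_rpc trace above.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # nonzero exit from the wrapped command means PASS
}

NOT false && echo "RPC correctly unavailable"
NOT true  || echo "wrapped command unexpectedly succeeded"
```

Capturing the status into `es` first (rather than writing `! "$@"`) is what lets the full helper distinguish an ordinary failure from a signal-death exit above 128.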
************************************ 00:04:25.304 START TEST skip_rpc_with_json 00:04:25.304 ************************************ 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58401 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58401 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58401 ']' 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.304 09:17:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.304 [2024-12-12 09:17:59.254043] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:25.304 [2024-12-12 09:17:59.254178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58401 ] 00:04:25.562 [2024-12-12 09:17:59.429388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.562 [2024-12-12 09:17:59.541790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 [2024-12-12 09:18:00.418090] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:26.498 request: 00:04:26.498 { 00:04:26.498 "trtype": "tcp", 00:04:26.498 "method": "nvmf_get_transports", 00:04:26.498 "req_id": 1 00:04:26.498 } 00:04:26.498 Got JSON-RPC error response 00:04:26.498 response: 00:04:26.498 { 00:04:26.498 "code": -19, 00:04:26.498 "message": "No such device" 00:04:26.498 } 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 [2024-12-12 09:18:00.430161] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.498 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.757 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.757 09:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.757 { 00:04:26.757 "subsystems": [ 00:04:26.757 { 00:04:26.757 "subsystem": "fsdev", 00:04:26.757 "config": [ 00:04:26.757 { 00:04:26.757 "method": "fsdev_set_opts", 00:04:26.757 "params": { 00:04:26.757 "fsdev_io_pool_size": 65535, 00:04:26.757 "fsdev_io_cache_size": 256 00:04:26.757 } 00:04:26.757 } 00:04:26.757 ] 00:04:26.757 }, 00:04:26.757 { 00:04:26.757 "subsystem": "keyring", 00:04:26.757 "config": [] 00:04:26.757 }, 00:04:26.757 { 00:04:26.757 "subsystem": "iobuf", 00:04:26.757 "config": [ 00:04:26.757 { 00:04:26.757 "method": "iobuf_set_options", 00:04:26.757 "params": { 00:04:26.757 "small_pool_count": 8192, 00:04:26.757 "large_pool_count": 1024, 00:04:26.757 "small_bufsize": 8192, 00:04:26.757 "large_bufsize": 135168, 00:04:26.757 "enable_numa": false 00:04:26.757 } 00:04:26.757 } 00:04:26.757 ] 00:04:26.757 }, 00:04:26.757 { 00:04:26.757 "subsystem": "sock", 00:04:26.757 "config": [ 00:04:26.757 { 00:04:26.757 "method": "sock_set_default_impl", 00:04:26.757 "params": { 00:04:26.757 "impl_name": "posix" 00:04:26.757 } 00:04:26.757 }, 00:04:26.757 { 00:04:26.757 "method": "sock_impl_set_options", 00:04:26.757 "params": { 00:04:26.757 "impl_name": "ssl", 00:04:26.757 "recv_buf_size": 4096, 00:04:26.757 "send_buf_size": 4096, 00:04:26.757 "enable_recv_pipe": true, 00:04:26.757 "enable_quickack": false, 00:04:26.757 
"enable_placement_id": 0, 00:04:26.757 "enable_zerocopy_send_server": true, 00:04:26.757 "enable_zerocopy_send_client": false, 00:04:26.757 "zerocopy_threshold": 0, 00:04:26.757 "tls_version": 0, 00:04:26.757 "enable_ktls": false 00:04:26.757 } 00:04:26.757 }, 00:04:26.757 { 00:04:26.757 "method": "sock_impl_set_options", 00:04:26.757 "params": { 00:04:26.757 "impl_name": "posix", 00:04:26.757 "recv_buf_size": 2097152, 00:04:26.757 "send_buf_size": 2097152, 00:04:26.757 "enable_recv_pipe": true, 00:04:26.757 "enable_quickack": false, 00:04:26.757 "enable_placement_id": 0, 00:04:26.758 "enable_zerocopy_send_server": true, 00:04:26.758 "enable_zerocopy_send_client": false, 00:04:26.758 "zerocopy_threshold": 0, 00:04:26.758 "tls_version": 0, 00:04:26.758 "enable_ktls": false 00:04:26.758 } 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "vmd", 00:04:26.758 "config": [] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "accel", 00:04:26.758 "config": [ 00:04:26.758 { 00:04:26.758 "method": "accel_set_options", 00:04:26.758 "params": { 00:04:26.758 "small_cache_size": 128, 00:04:26.758 "large_cache_size": 16, 00:04:26.758 "task_count": 2048, 00:04:26.758 "sequence_count": 2048, 00:04:26.758 "buf_count": 2048 00:04:26.758 } 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "bdev", 00:04:26.758 "config": [ 00:04:26.758 { 00:04:26.758 "method": "bdev_set_options", 00:04:26.758 "params": { 00:04:26.758 "bdev_io_pool_size": 65535, 00:04:26.758 "bdev_io_cache_size": 256, 00:04:26.758 "bdev_auto_examine": true, 00:04:26.758 "iobuf_small_cache_size": 128, 00:04:26.758 "iobuf_large_cache_size": 16 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "bdev_raid_set_options", 00:04:26.758 "params": { 00:04:26.758 "process_window_size_kb": 1024, 00:04:26.758 "process_max_bandwidth_mb_sec": 0 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "bdev_iscsi_set_options", 
00:04:26.758 "params": { 00:04:26.758 "timeout_sec": 30 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "bdev_nvme_set_options", 00:04:26.758 "params": { 00:04:26.758 "action_on_timeout": "none", 00:04:26.758 "timeout_us": 0, 00:04:26.758 "timeout_admin_us": 0, 00:04:26.758 "keep_alive_timeout_ms": 10000, 00:04:26.758 "arbitration_burst": 0, 00:04:26.758 "low_priority_weight": 0, 00:04:26.758 "medium_priority_weight": 0, 00:04:26.758 "high_priority_weight": 0, 00:04:26.758 "nvme_adminq_poll_period_us": 10000, 00:04:26.758 "nvme_ioq_poll_period_us": 0, 00:04:26.758 "io_queue_requests": 0, 00:04:26.758 "delay_cmd_submit": true, 00:04:26.758 "transport_retry_count": 4, 00:04:26.758 "bdev_retry_count": 3, 00:04:26.758 "transport_ack_timeout": 0, 00:04:26.758 "ctrlr_loss_timeout_sec": 0, 00:04:26.758 "reconnect_delay_sec": 0, 00:04:26.758 "fast_io_fail_timeout_sec": 0, 00:04:26.758 "disable_auto_failback": false, 00:04:26.758 "generate_uuids": false, 00:04:26.758 "transport_tos": 0, 00:04:26.758 "nvme_error_stat": false, 00:04:26.758 "rdma_srq_size": 0, 00:04:26.758 "io_path_stat": false, 00:04:26.758 "allow_accel_sequence": false, 00:04:26.758 "rdma_max_cq_size": 0, 00:04:26.758 "rdma_cm_event_timeout_ms": 0, 00:04:26.758 "dhchap_digests": [ 00:04:26.758 "sha256", 00:04:26.758 "sha384", 00:04:26.758 "sha512" 00:04:26.758 ], 00:04:26.758 "dhchap_dhgroups": [ 00:04:26.758 "null", 00:04:26.758 "ffdhe2048", 00:04:26.758 "ffdhe3072", 00:04:26.758 "ffdhe4096", 00:04:26.758 "ffdhe6144", 00:04:26.758 "ffdhe8192" 00:04:26.758 ], 00:04:26.758 "rdma_umr_per_io": false 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "bdev_nvme_set_hotplug", 00:04:26.758 "params": { 00:04:26.758 "period_us": 100000, 00:04:26.758 "enable": false 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "bdev_wait_for_examine" 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "scsi", 00:04:26.758 "config": null 
00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "scheduler", 00:04:26.758 "config": [ 00:04:26.758 { 00:04:26.758 "method": "framework_set_scheduler", 00:04:26.758 "params": { 00:04:26.758 "name": "static" 00:04:26.758 } 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "vhost_scsi", 00:04:26.758 "config": [] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "vhost_blk", 00:04:26.758 "config": [] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "ublk", 00:04:26.758 "config": [] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "nbd", 00:04:26.758 "config": [] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "nvmf", 00:04:26.758 "config": [ 00:04:26.758 { 00:04:26.758 "method": "nvmf_set_config", 00:04:26.758 "params": { 00:04:26.758 "discovery_filter": "match_any", 00:04:26.758 "admin_cmd_passthru": { 00:04:26.758 "identify_ctrlr": false 00:04:26.758 }, 00:04:26.758 "dhchap_digests": [ 00:04:26.758 "sha256", 00:04:26.758 "sha384", 00:04:26.758 "sha512" 00:04:26.758 ], 00:04:26.758 "dhchap_dhgroups": [ 00:04:26.758 "null", 00:04:26.758 "ffdhe2048", 00:04:26.758 "ffdhe3072", 00:04:26.758 "ffdhe4096", 00:04:26.758 "ffdhe6144", 00:04:26.758 "ffdhe8192" 00:04:26.758 ] 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "nvmf_set_max_subsystems", 00:04:26.758 "params": { 00:04:26.758 "max_subsystems": 1024 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "nvmf_set_crdt", 00:04:26.758 "params": { 00:04:26.758 "crdt1": 0, 00:04:26.758 "crdt2": 0, 00:04:26.758 "crdt3": 0 00:04:26.758 } 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "method": "nvmf_create_transport", 00:04:26.758 "params": { 00:04:26.758 "trtype": "TCP", 00:04:26.758 "max_queue_depth": 128, 00:04:26.758 "max_io_qpairs_per_ctrlr": 127, 00:04:26.758 "in_capsule_data_size": 4096, 00:04:26.758 "max_io_size": 131072, 00:04:26.758 "io_unit_size": 131072, 00:04:26.758 "max_aq_depth": 128, 00:04:26.758 
"num_shared_buffers": 511, 00:04:26.758 "buf_cache_size": 4294967295, 00:04:26.758 "dif_insert_or_strip": false, 00:04:26.758 "zcopy": false, 00:04:26.758 "c2h_success": true, 00:04:26.758 "sock_priority": 0, 00:04:26.758 "abort_timeout_sec": 1, 00:04:26.758 "ack_timeout": 0, 00:04:26.758 "data_wr_pool_size": 0 00:04:26.758 } 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 }, 00:04:26.758 { 00:04:26.758 "subsystem": "iscsi", 00:04:26.758 "config": [ 00:04:26.758 { 00:04:26.758 "method": "iscsi_set_options", 00:04:26.758 "params": { 00:04:26.758 "node_base": "iqn.2016-06.io.spdk", 00:04:26.758 "max_sessions": 128, 00:04:26.758 "max_connections_per_session": 2, 00:04:26.758 "max_queue_depth": 64, 00:04:26.758 "default_time2wait": 2, 00:04:26.758 "default_time2retain": 20, 00:04:26.758 "first_burst_length": 8192, 00:04:26.758 "immediate_data": true, 00:04:26.758 "allow_duplicated_isid": false, 00:04:26.758 "error_recovery_level": 0, 00:04:26.758 "nop_timeout": 60, 00:04:26.758 "nop_in_interval": 30, 00:04:26.758 "disable_chap": false, 00:04:26.758 "require_chap": false, 00:04:26.758 "mutual_chap": false, 00:04:26.758 "chap_group": 0, 00:04:26.758 "max_large_datain_per_connection": 64, 00:04:26.758 "max_r2t_per_connection": 4, 00:04:26.758 "pdu_pool_size": 36864, 00:04:26.758 "immediate_data_pool_size": 16384, 00:04:26.758 "data_out_pool_size": 2048 00:04:26.758 } 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 } 00:04:26.758 ] 00:04:26.758 } 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58401 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58401 ']' 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58401 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:26.758 09:18:00 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58401 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.758 killing process with pid 58401 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58401' 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58401 00:04:26.758 09:18:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58401 00:04:29.290 09:18:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58446 00:04:29.290 09:18:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.290 09:18:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58446 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58446 ']' 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58446 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58446 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.562 killing process with pid 58446 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58446' 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58446 00:04:34.562 09:18:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58446 00:04:36.511 09:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:36.511 09:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:36.511 00:04:36.511 real 0m11.312s 00:04:36.511 user 0m10.787s 00:04:36.511 sys 0m0.815s 00:04:36.511 09:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.511 09:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.511 ************************************ 00:04:36.511 END TEST skip_rpc_with_json 00:04:36.511 ************************************ 00:04:36.511 09:18:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:36.511 09:18:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.511 09:18:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.511 09:18:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.511 ************************************ 00:04:36.511 START TEST skip_rpc_with_delay 00:04:36.511 ************************************ 00:04:36.511 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:36.511 09:18:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # local es=0 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.771 [2024-12-12 09:18:10.638991] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.771 00:04:36.771 real 0m0.172s 00:04:36.771 user 0m0.088s 00:04:36.771 sys 0m0.082s 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.771 09:18:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:36.771 ************************************ 00:04:36.771 END TEST skip_rpc_with_delay 00:04:36.771 ************************************ 00:04:36.771 09:18:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:36.771 09:18:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:36.771 09:18:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:36.771 09:18:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.771 09:18:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.771 09:18:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.771 ************************************ 00:04:36.772 START TEST exit_on_failed_rpc_init 00:04:36.772 ************************************ 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58585 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58585 00:04:36.772 09:18:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58585 ']' 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.772 09:18:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.030 [2024-12-12 09:18:10.877377] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:37.030 [2024-12-12 09:18:10.877518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58585 ] 00:04:37.030 [2024-12-12 09:18:11.043075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.290 [2024-12-12 09:18:11.157613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.229 09:18:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.229 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.229 [2024-12-12 09:18:12.143422] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:38.229 [2024-12-12 09:18:12.143559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58603 ] 00:04:38.488 [2024-12-12 09:18:12.319374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.488 [2024-12-12 09:18:12.464523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.488 [2024-12-12 09:18:12.464637] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:38.488 [2024-12-12 09:18:12.464651] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:38.488 [2024-12-12 09:18:12.464663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58585 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58585 ']' 00:04:38.747 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58585 00:04:38.747 09:18:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58585 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.007 killing process with pid 58585 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58585' 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58585 00:04:39.007 09:18:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58585 00:04:41.545 00:04:41.545 real 0m4.395s 00:04:41.545 user 0m4.784s 00:04:41.545 sys 0m0.561s 00:04:41.545 09:18:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.545 09:18:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.545 ************************************ 00:04:41.545 END TEST exit_on_failed_rpc_init 00:04:41.545 ************************************ 00:04:41.545 09:18:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.545 00:04:41.545 real 0m23.818s 00:04:41.545 user 0m22.863s 00:04:41.545 sys 0m2.133s 00:04:41.545 09:18:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.545 09:18:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.545 ************************************ 00:04:41.545 END TEST skip_rpc 00:04:41.545 ************************************ 00:04:41.545 09:18:15 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.545 09:18:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.545 09:18:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.545 09:18:15 -- common/autotest_common.sh@10 -- # set +x 00:04:41.545 ************************************ 00:04:41.545 START TEST rpc_client 00:04:41.545 ************************************ 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.545 * Looking for test storage... 00:04:41.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.545 09:18:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.545 --rc genhtml_branch_coverage=1 00:04:41.545 --rc genhtml_function_coverage=1 00:04:41.545 --rc genhtml_legend=1 00:04:41.545 --rc geninfo_all_blocks=1 00:04:41.545 --rc geninfo_unexecuted_blocks=1 00:04:41.545 00:04:41.545 ' 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.545 --rc genhtml_branch_coverage=1 00:04:41.545 --rc genhtml_function_coverage=1 00:04:41.545 --rc 
genhtml_legend=1 00:04:41.545 --rc geninfo_all_blocks=1 00:04:41.545 --rc geninfo_unexecuted_blocks=1 00:04:41.545 00:04:41.545 ' 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.545 --rc genhtml_branch_coverage=1 00:04:41.545 --rc genhtml_function_coverage=1 00:04:41.545 --rc genhtml_legend=1 00:04:41.545 --rc geninfo_all_blocks=1 00:04:41.545 --rc geninfo_unexecuted_blocks=1 00:04:41.545 00:04:41.545 ' 00:04:41.545 09:18:15 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.546 --rc genhtml_branch_coverage=1 00:04:41.546 --rc genhtml_function_coverage=1 00:04:41.546 --rc genhtml_legend=1 00:04:41.546 --rc geninfo_all_blocks=1 00:04:41.546 --rc geninfo_unexecuted_blocks=1 00:04:41.546 00:04:41.546 ' 00:04:41.546 09:18:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.546 OK 00:04:41.806 09:18:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.806 00:04:41.806 real 0m0.290s 00:04:41.806 user 0m0.146s 00:04:41.806 sys 0m0.159s 00:04:41.806 09:18:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.806 09:18:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.806 ************************************ 00:04:41.806 END TEST rpc_client 00:04:41.806 ************************************ 00:04:41.806 09:18:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.806 09:18:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.806 09:18:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.806 09:18:15 -- common/autotest_common.sh@10 -- # set +x 00:04:41.806 ************************************ 00:04:41.806 START TEST json_config 
00:04:41.806 ************************************ 00:04:41.806 09:18:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.806 09:18:15 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.806 09:18:15 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.806 09:18:15 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.806 09:18:15 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.806 09:18:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.806 09:18:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.806 09:18:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.806 09:18:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.806 09:18:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.806 09:18:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.806 09:18:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.806 09:18:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.806 09:18:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.806 09:18:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.806 09:18:15 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.806 09:18:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.806 09:18:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.806 09:18:15 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.806 09:18:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.806 09:18:15 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.806 09:18:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.806 09:18:15 json_config -- scripts/common.sh@355 -- # echo 2 00:04:42.066 09:18:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.067 09:18:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.067 09:18:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.067 09:18:15 json_config -- scripts/common.sh@368 -- # return 0 00:04:42.067 09:18:15 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.067 09:18:15 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.067 --rc genhtml_branch_coverage=1 00:04:42.067 --rc genhtml_function_coverage=1 00:04:42.067 --rc genhtml_legend=1 00:04:42.067 --rc geninfo_all_blocks=1 00:04:42.067 --rc geninfo_unexecuted_blocks=1 00:04:42.067 00:04:42.067 ' 00:04:42.067 09:18:15 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.067 --rc genhtml_branch_coverage=1 00:04:42.067 --rc genhtml_function_coverage=1 00:04:42.067 --rc genhtml_legend=1 00:04:42.067 --rc geninfo_all_blocks=1 00:04:42.067 --rc geninfo_unexecuted_blocks=1 00:04:42.067 00:04:42.067 ' 00:04:42.067 09:18:15 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.067 --rc genhtml_branch_coverage=1 00:04:42.067 --rc genhtml_function_coverage=1 00:04:42.067 --rc genhtml_legend=1 00:04:42.067 --rc geninfo_all_blocks=1 00:04:42.067 --rc geninfo_unexecuted_blocks=1 00:04:42.067 00:04:42.067 ' 00:04:42.067 09:18:15 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.067 --rc genhtml_branch_coverage=1 00:04:42.067 --rc genhtml_function_coverage=1 00:04:42.067 --rc genhtml_legend=1 00:04:42.067 --rc geninfo_all_blocks=1 00:04:42.067 --rc geninfo_unexecuted_blocks=1 00:04:42.067 00:04:42.067 ' 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4b033963-8381-4c36-8d4b-2a6d498e4080 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=4b033963-8381-4c36-8d4b-2a6d498e4080 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.067 09:18:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.067 09:18:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.067 09:18:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.067 09:18:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.067 09:18:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.067 09:18:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.067 09:18:15 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.067 09:18:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.067 09:18:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@51 -- # : 0 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.067 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.067 09:18:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:42.067 WARNING: No tests are enabled so not running JSON configuration tests 00:04:42.067 09:18:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:42.067 00:04:42.067 real 0m0.231s 00:04:42.067 user 0m0.144s 00:04:42.067 sys 0m0.093s 00:04:42.067 09:18:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.067 09:18:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.067 ************************************ 00:04:42.067 END TEST json_config 00:04:42.067 ************************************ 00:04:42.067 09:18:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:42.067 09:18:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.067 09:18:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.067 09:18:15 -- common/autotest_common.sh@10 -- # set +x 00:04:42.067 ************************************ 00:04:42.067 START TEST json_config_extra_key 00:04:42.067 ************************************ 00:04:42.067 09:18:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:42.067 09:18:16 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.067 09:18:16 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:42.067 09:18:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.328 09:18:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.328 09:18:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.329 --rc genhtml_branch_coverage=1 00:04:42.329 --rc genhtml_function_coverage=1 00:04:42.329 --rc genhtml_legend=1 00:04:42.329 --rc geninfo_all_blocks=1 00:04:42.329 --rc geninfo_unexecuted_blocks=1 00:04:42.329 00:04:42.329 ' 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.329 --rc genhtml_branch_coverage=1 00:04:42.329 --rc genhtml_function_coverage=1 00:04:42.329 --rc 
genhtml_legend=1 00:04:42.329 --rc geninfo_all_blocks=1 00:04:42.329 --rc geninfo_unexecuted_blocks=1 00:04:42.329 00:04:42.329 ' 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.329 --rc genhtml_branch_coverage=1 00:04:42.329 --rc genhtml_function_coverage=1 00:04:42.329 --rc genhtml_legend=1 00:04:42.329 --rc geninfo_all_blocks=1 00:04:42.329 --rc geninfo_unexecuted_blocks=1 00:04:42.329 00:04:42.329 ' 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.329 --rc genhtml_branch_coverage=1 00:04:42.329 --rc genhtml_function_coverage=1 00:04:42.329 --rc genhtml_legend=1 00:04:42.329 --rc geninfo_all_blocks=1 00:04:42.329 --rc geninfo_unexecuted_blocks=1 00:04:42.329 00:04:42.329 ' 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4b033963-8381-4c36-8d4b-2a6d498e4080 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4b033963-8381-4c36-8d4b-2a6d498e4080 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.329 09:18:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.329 09:18:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.329 09:18:16 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.329 09:18:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.329 09:18:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.329 09:18:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.329 09:18:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.329 INFO: launching applications... 00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:42.329 09:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58813 00:04:42.329 Waiting for target to run... 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58813 /var/tmp/spdk_tgt.sock 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58813 ']' 00:04:42.329 09:18:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.329 09:18:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.329 [2024-12-12 09:18:16.309506] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:42.329 [2024-12-12 09:18:16.310103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 00:04:42.899 [2024-12-12 09:18:16.712867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.899 [2024-12-12 09:18:16.821593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.836 09:18:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.836 09:18:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:43.836 00:04:43.836 09:18:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.837 INFO: shutting down applications... 00:04:43.837 09:18:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:43.837 09:18:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58813 ]] 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58813 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:43.837 09:18:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.096 09:18:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.096 09:18:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.096 09:18:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:44.096 09:18:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.665 09:18:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.665 09:18:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.665 09:18:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:44.665 09:18:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.235 09:18:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.235 09:18:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.235 09:18:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:45.235 09:18:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.803 09:18:19 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:45.803 09:18:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.803 09:18:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:45.803 09:18:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.373 09:18:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.373 09:18:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.373 09:18:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:46.373 09:18:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58813 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.633 SPDK target shutdown done 00:04:46.633 09:18:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.633 Success 00:04:46.633 09:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.633 00:04:46.633 real 0m4.668s 00:04:46.633 user 0m4.154s 00:04:46.633 sys 0m0.587s 00:04:46.633 09:18:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.633 09:18:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.633 ************************************ 00:04:46.633 END TEST json_config_extra_key 00:04:46.633 ************************************ 00:04:46.893 09:18:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.893 09:18:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.893 09:18:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.893 09:18:20 -- common/autotest_common.sh@10 -- # set +x 00:04:46.893 ************************************ 00:04:46.893 START TEST alias_rpc 00:04:46.893 ************************************ 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.893 * Looking for test storage... 00:04:46.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.893 09:18:20 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.893 09:18:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.893 09:18:20 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.893 --rc genhtml_branch_coverage=1 00:04:46.893 --rc genhtml_function_coverage=1 00:04:46.893 --rc genhtml_legend=1 00:04:46.893 --rc geninfo_all_blocks=1 00:04:46.893 --rc geninfo_unexecuted_blocks=1 00:04:46.894 00:04:46.894 ' 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.894 --rc genhtml_branch_coverage=1 00:04:46.894 --rc genhtml_function_coverage=1 00:04:46.894 --rc 
genhtml_legend=1 00:04:46.894 --rc geninfo_all_blocks=1 00:04:46.894 --rc geninfo_unexecuted_blocks=1 00:04:46.894 00:04:46.894 ' 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.894 --rc genhtml_branch_coverage=1 00:04:46.894 --rc genhtml_function_coverage=1 00:04:46.894 --rc genhtml_legend=1 00:04:46.894 --rc geninfo_all_blocks=1 00:04:46.894 --rc geninfo_unexecuted_blocks=1 00:04:46.894 00:04:46.894 ' 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.894 --rc genhtml_branch_coverage=1 00:04:46.894 --rc genhtml_function_coverage=1 00:04:46.894 --rc genhtml_legend=1 00:04:46.894 --rc geninfo_all_blocks=1 00:04:46.894 --rc geninfo_unexecuted_blocks=1 00:04:46.894 00:04:46.894 ' 00:04:46.894 09:18:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.894 09:18:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58925 00:04:46.894 09:18:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.894 09:18:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58925 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58925 ']' 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.894 09:18:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.154 [2024-12-12 09:18:20.972243] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:47.154 [2024-12-12 09:18:20.972673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:04:47.154 [2024-12-12 09:18:21.144110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.413 [2024-12-12 09:18:21.261997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.359 09:18:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.359 09:18:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.359 09:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:48.619 09:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58925 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58925 ']' 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58925 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58925 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.619 killing process with pid 58925 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58925' 00:04:48.619 09:18:22 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 58925 00:04:48.619 09:18:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 58925 00:04:51.156 00:04:51.156 real 0m4.233s 00:04:51.156 user 0m4.290s 00:04:51.156 sys 0m0.562s 00:04:51.156 09:18:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.156 09:18:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.156 ************************************ 00:04:51.156 END TEST alias_rpc 00:04:51.156 ************************************ 00:04:51.156 09:18:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:51.156 09:18:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.156 09:18:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.156 09:18:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.156 09:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.156 ************************************ 00:04:51.156 START TEST spdkcli_tcp 00:04:51.156 ************************************ 00:04:51.156 09:18:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.156 * Looking for test storage... 
00:04:51.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:51.156 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.156 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.156 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.416 09:18:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.416 --rc genhtml_branch_coverage=1 00:04:51.416 --rc genhtml_function_coverage=1 00:04:51.416 --rc genhtml_legend=1 00:04:51.416 --rc geninfo_all_blocks=1 00:04:51.416 --rc geninfo_unexecuted_blocks=1 00:04:51.416 00:04:51.416 ' 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.416 --rc genhtml_branch_coverage=1 00:04:51.416 --rc genhtml_function_coverage=1 00:04:51.416 --rc genhtml_legend=1 00:04:51.416 --rc geninfo_all_blocks=1 00:04:51.416 --rc geninfo_unexecuted_blocks=1 00:04:51.416 00:04:51.416 ' 00:04:51.416 09:18:25 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.416 --rc genhtml_branch_coverage=1 00:04:51.416 --rc genhtml_function_coverage=1 00:04:51.416 --rc genhtml_legend=1 00:04:51.416 --rc geninfo_all_blocks=1 00:04:51.416 --rc geninfo_unexecuted_blocks=1 00:04:51.416 00:04:51.416 ' 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.416 --rc genhtml_branch_coverage=1 00:04:51.416 --rc genhtml_function_coverage=1 00:04:51.416 --rc genhtml_legend=1 00:04:51.416 --rc geninfo_all_blocks=1 00:04:51.416 --rc geninfo_unexecuted_blocks=1 00:04:51.416 00:04:51.416 ' 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59032 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.416 09:18:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59032 00:04:51.416 09:18:25 spdkcli_tcp -- 
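The `lt 1.15 2` / `cmp_versions` trace above compares dotted version strings component by component. A minimal sketch of that comparison, using a hypothetical `ver_lt` helper rather than the exact `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# ver_lt A B — return 0 (true) when version A is strictly less than B.
# Splits each version on '.' and compares numeric components, padding
# the shorter version with zeros, mirroring the cmp_versions loop in
# the trace (this is a simplified sketch, not the SPDK script itself).
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the decision visible in the log: lcov 1.15 is older than 2, so the harness selects the pre-2.0 `--rc lcov_*_coverage` option spellings.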
common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.416 09:18:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.416 [2024-12-12 09:18:25.331107] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:51.416 [2024-12-12 09:18:25.331800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:04:51.675 [2024-12-12 09:18:25.515687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.675 [2024-12-12 09:18:25.636023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.675 [2024-12-12 09:18:25.636064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.612 09:18:26 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.612 09:18:26 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:52.612 09:18:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59054 00:04:52.612 09:18:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.612 09:18:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.872 [ 00:04:52.872 "bdev_malloc_delete", 
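The `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` line above bridges the SPDK JSON-RPC UNIX socket onto TCP so that `rpc.py -s 127.0.0.1 -p 9998` can reach the target over the network. A sketch of that bridge; the `reuseaddr,fork` options are assumptions for robustness, the log shows only the bare address pair:

```shell
#!/usr/bin/env bash
# Bridge an SPDK JSON-RPC UNIX socket to a local TCP port (sketch).
IP_ADDRESS=127.0.0.1
PORT=9998
RPC_SOCK=/var/tmp/spdk.sock

# fork: handle multiple sequential RPC connections;
# reuseaddr: allow quick restarts of the bridge.
listen_spec="TCP-LISTEN:${PORT},bind=${IP_ADDRESS},reuseaddr,fork"
connect_spec="UNIX-CONNECT:${RPC_SOCK}"

# Requires a running spdk_tgt listening on $RPC_SOCK:
# socat "$listen_spec" "$connect_spec" &
# socat_pid=$!

echo "would bridge: $listen_spec -> $connect_spec"
```

The test then drives `rpc.py` against `127.0.0.1:9998` exactly as if it were the local socket, which is what the `rpc_get_methods` call below exercises.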
00:04:52.872 "bdev_malloc_create", 00:04:52.872 "bdev_null_resize", 00:04:52.872 "bdev_null_delete", 00:04:52.872 "bdev_null_create", 00:04:52.872 "bdev_nvme_cuse_unregister", 00:04:52.872 "bdev_nvme_cuse_register", 00:04:52.872 "bdev_opal_new_user", 00:04:52.872 "bdev_opal_set_lock_state", 00:04:52.872 "bdev_opal_delete", 00:04:52.872 "bdev_opal_get_info", 00:04:52.872 "bdev_opal_create", 00:04:52.872 "bdev_nvme_opal_revert", 00:04:52.872 "bdev_nvme_opal_init", 00:04:52.872 "bdev_nvme_send_cmd", 00:04:52.872 "bdev_nvme_set_keys", 00:04:52.872 "bdev_nvme_get_path_iostat", 00:04:52.872 "bdev_nvme_get_mdns_discovery_info", 00:04:52.872 "bdev_nvme_stop_mdns_discovery", 00:04:52.872 "bdev_nvme_start_mdns_discovery", 00:04:52.872 "bdev_nvme_set_multipath_policy", 00:04:52.872 "bdev_nvme_set_preferred_path", 00:04:52.872 "bdev_nvme_get_io_paths", 00:04:52.872 "bdev_nvme_remove_error_injection", 00:04:52.872 "bdev_nvme_add_error_injection", 00:04:52.872 "bdev_nvme_get_discovery_info", 00:04:52.872 "bdev_nvme_stop_discovery", 00:04:52.872 "bdev_nvme_start_discovery", 00:04:52.872 "bdev_nvme_get_controller_health_info", 00:04:52.872 "bdev_nvme_disable_controller", 00:04:52.872 "bdev_nvme_enable_controller", 00:04:52.872 "bdev_nvme_reset_controller", 00:04:52.872 "bdev_nvme_get_transport_statistics", 00:04:52.872 "bdev_nvme_apply_firmware", 00:04:52.872 "bdev_nvme_detach_controller", 00:04:52.872 "bdev_nvme_get_controllers", 00:04:52.872 "bdev_nvme_attach_controller", 00:04:52.872 "bdev_nvme_set_hotplug", 00:04:52.872 "bdev_nvme_set_options", 00:04:52.872 "bdev_passthru_delete", 00:04:52.872 "bdev_passthru_create", 00:04:52.872 "bdev_lvol_set_parent_bdev", 00:04:52.872 "bdev_lvol_set_parent", 00:04:52.872 "bdev_lvol_check_shallow_copy", 00:04:52.872 "bdev_lvol_start_shallow_copy", 00:04:52.872 "bdev_lvol_grow_lvstore", 00:04:52.872 "bdev_lvol_get_lvols", 00:04:52.872 "bdev_lvol_get_lvstores", 00:04:52.872 "bdev_lvol_delete", 00:04:52.872 "bdev_lvol_set_read_only", 
00:04:52.872 "bdev_lvol_resize", 00:04:52.872 "bdev_lvol_decouple_parent", 00:04:52.872 "bdev_lvol_inflate", 00:04:52.872 "bdev_lvol_rename", 00:04:52.872 "bdev_lvol_clone_bdev", 00:04:52.872 "bdev_lvol_clone", 00:04:52.872 "bdev_lvol_snapshot", 00:04:52.872 "bdev_lvol_create", 00:04:52.872 "bdev_lvol_delete_lvstore", 00:04:52.872 "bdev_lvol_rename_lvstore", 00:04:52.872 "bdev_lvol_create_lvstore", 00:04:52.872 "bdev_raid_set_options", 00:04:52.872 "bdev_raid_remove_base_bdev", 00:04:52.872 "bdev_raid_add_base_bdev", 00:04:52.872 "bdev_raid_delete", 00:04:52.872 "bdev_raid_create", 00:04:52.872 "bdev_raid_get_bdevs", 00:04:52.872 "bdev_error_inject_error", 00:04:52.872 "bdev_error_delete", 00:04:52.872 "bdev_error_create", 00:04:52.872 "bdev_split_delete", 00:04:52.872 "bdev_split_create", 00:04:52.872 "bdev_delay_delete", 00:04:52.872 "bdev_delay_create", 00:04:52.872 "bdev_delay_update_latency", 00:04:52.872 "bdev_zone_block_delete", 00:04:52.872 "bdev_zone_block_create", 00:04:52.872 "blobfs_create", 00:04:52.872 "blobfs_detect", 00:04:52.872 "blobfs_set_cache_size", 00:04:52.872 "bdev_aio_delete", 00:04:52.872 "bdev_aio_rescan", 00:04:52.872 "bdev_aio_create", 00:04:52.872 "bdev_ftl_set_property", 00:04:52.872 "bdev_ftl_get_properties", 00:04:52.872 "bdev_ftl_get_stats", 00:04:52.872 "bdev_ftl_unmap", 00:04:52.872 "bdev_ftl_unload", 00:04:52.872 "bdev_ftl_delete", 00:04:52.872 "bdev_ftl_load", 00:04:52.872 "bdev_ftl_create", 00:04:52.872 "bdev_virtio_attach_controller", 00:04:52.872 "bdev_virtio_scsi_get_devices", 00:04:52.872 "bdev_virtio_detach_controller", 00:04:52.872 "bdev_virtio_blk_set_hotplug", 00:04:52.872 "bdev_iscsi_delete", 00:04:52.872 "bdev_iscsi_create", 00:04:52.872 "bdev_iscsi_set_options", 00:04:52.872 "accel_error_inject_error", 00:04:52.872 "ioat_scan_accel_module", 00:04:52.872 "dsa_scan_accel_module", 00:04:52.872 "iaa_scan_accel_module", 00:04:52.872 "keyring_file_remove_key", 00:04:52.872 "keyring_file_add_key", 00:04:52.872 
"keyring_linux_set_options", 00:04:52.872 "fsdev_aio_delete", 00:04:52.872 "fsdev_aio_create", 00:04:52.872 "iscsi_get_histogram", 00:04:52.872 "iscsi_enable_histogram", 00:04:52.872 "iscsi_set_options", 00:04:52.872 "iscsi_get_auth_groups", 00:04:52.872 "iscsi_auth_group_remove_secret", 00:04:52.872 "iscsi_auth_group_add_secret", 00:04:52.872 "iscsi_delete_auth_group", 00:04:52.872 "iscsi_create_auth_group", 00:04:52.872 "iscsi_set_discovery_auth", 00:04:52.872 "iscsi_get_options", 00:04:52.872 "iscsi_target_node_request_logout", 00:04:52.872 "iscsi_target_node_set_redirect", 00:04:52.872 "iscsi_target_node_set_auth", 00:04:52.872 "iscsi_target_node_add_lun", 00:04:52.872 "iscsi_get_stats", 00:04:52.872 "iscsi_get_connections", 00:04:52.872 "iscsi_portal_group_set_auth", 00:04:52.872 "iscsi_start_portal_group", 00:04:52.872 "iscsi_delete_portal_group", 00:04:52.872 "iscsi_create_portal_group", 00:04:52.872 "iscsi_get_portal_groups", 00:04:52.872 "iscsi_delete_target_node", 00:04:52.872 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.872 "iscsi_target_node_add_pg_ig_maps", 00:04:52.872 "iscsi_create_target_node", 00:04:52.872 "iscsi_get_target_nodes", 00:04:52.872 "iscsi_delete_initiator_group", 00:04:52.872 "iscsi_initiator_group_remove_initiators", 00:04:52.872 "iscsi_initiator_group_add_initiators", 00:04:52.872 "iscsi_create_initiator_group", 00:04:52.872 "iscsi_get_initiator_groups", 00:04:52.872 "nvmf_set_crdt", 00:04:52.872 "nvmf_set_config", 00:04:52.872 "nvmf_set_max_subsystems", 00:04:52.872 "nvmf_stop_mdns_prr", 00:04:52.872 "nvmf_publish_mdns_prr", 00:04:52.872 "nvmf_subsystem_get_listeners", 00:04:52.872 "nvmf_subsystem_get_qpairs", 00:04:52.872 "nvmf_subsystem_get_controllers", 00:04:52.872 "nvmf_get_stats", 00:04:52.872 "nvmf_get_transports", 00:04:52.872 "nvmf_create_transport", 00:04:52.872 "nvmf_get_targets", 00:04:52.872 "nvmf_delete_target", 00:04:52.872 "nvmf_create_target", 00:04:52.872 "nvmf_subsystem_allow_any_host", 00:04:52.872 
"nvmf_subsystem_set_keys", 00:04:52.872 "nvmf_subsystem_remove_host", 00:04:52.872 "nvmf_subsystem_add_host", 00:04:52.872 "nvmf_ns_remove_host", 00:04:52.872 "nvmf_ns_add_host", 00:04:52.872 "nvmf_subsystem_remove_ns", 00:04:52.872 "nvmf_subsystem_set_ns_ana_group", 00:04:52.872 "nvmf_subsystem_add_ns", 00:04:52.872 "nvmf_subsystem_listener_set_ana_state", 00:04:52.872 "nvmf_discovery_get_referrals", 00:04:52.872 "nvmf_discovery_remove_referral", 00:04:52.872 "nvmf_discovery_add_referral", 00:04:52.872 "nvmf_subsystem_remove_listener", 00:04:52.872 "nvmf_subsystem_add_listener", 00:04:52.872 "nvmf_delete_subsystem", 00:04:52.872 "nvmf_create_subsystem", 00:04:52.872 "nvmf_get_subsystems", 00:04:52.872 "env_dpdk_get_mem_stats", 00:04:52.872 "nbd_get_disks", 00:04:52.872 "nbd_stop_disk", 00:04:52.872 "nbd_start_disk", 00:04:52.872 "ublk_recover_disk", 00:04:52.872 "ublk_get_disks", 00:04:52.872 "ublk_stop_disk", 00:04:52.872 "ublk_start_disk", 00:04:52.872 "ublk_destroy_target", 00:04:52.872 "ublk_create_target", 00:04:52.872 "virtio_blk_create_transport", 00:04:52.872 "virtio_blk_get_transports", 00:04:52.872 "vhost_controller_set_coalescing", 00:04:52.872 "vhost_get_controllers", 00:04:52.872 "vhost_delete_controller", 00:04:52.872 "vhost_create_blk_controller", 00:04:52.872 "vhost_scsi_controller_remove_target", 00:04:52.872 "vhost_scsi_controller_add_target", 00:04:52.872 "vhost_start_scsi_controller", 00:04:52.872 "vhost_create_scsi_controller", 00:04:52.872 "thread_set_cpumask", 00:04:52.872 "scheduler_set_options", 00:04:52.872 "framework_get_governor", 00:04:52.872 "framework_get_scheduler", 00:04:52.872 "framework_set_scheduler", 00:04:52.872 "framework_get_reactors", 00:04:52.872 "thread_get_io_channels", 00:04:52.872 "thread_get_pollers", 00:04:52.872 "thread_get_stats", 00:04:52.872 "framework_monitor_context_switch", 00:04:52.872 "spdk_kill_instance", 00:04:52.872 "log_enable_timestamps", 00:04:52.872 "log_get_flags", 00:04:52.872 "log_clear_flag", 
00:04:52.872 "log_set_flag", 00:04:52.872 "log_get_level", 00:04:52.872 "log_set_level", 00:04:52.872 "log_get_print_level", 00:04:52.872 "log_set_print_level", 00:04:52.872 "framework_enable_cpumask_locks", 00:04:52.872 "framework_disable_cpumask_locks", 00:04:52.872 "framework_wait_init", 00:04:52.872 "framework_start_init", 00:04:52.872 "scsi_get_devices", 00:04:52.872 "bdev_get_histogram", 00:04:52.872 "bdev_enable_histogram", 00:04:52.872 "bdev_set_qos_limit", 00:04:52.872 "bdev_set_qd_sampling_period", 00:04:52.872 "bdev_get_bdevs", 00:04:52.872 "bdev_reset_iostat", 00:04:52.872 "bdev_get_iostat", 00:04:52.872 "bdev_examine", 00:04:52.872 "bdev_wait_for_examine", 00:04:52.872 "bdev_set_options", 00:04:52.872 "accel_get_stats", 00:04:52.872 "accel_set_options", 00:04:52.872 "accel_set_driver", 00:04:52.872 "accel_crypto_key_destroy", 00:04:52.872 "accel_crypto_keys_get", 00:04:52.872 "accel_crypto_key_create", 00:04:52.872 "accel_assign_opc", 00:04:52.872 "accel_get_module_info", 00:04:52.872 "accel_get_opc_assignments", 00:04:52.872 "vmd_rescan", 00:04:52.872 "vmd_remove_device", 00:04:52.872 "vmd_enable", 00:04:52.872 "sock_get_default_impl", 00:04:52.872 "sock_set_default_impl", 00:04:52.872 "sock_impl_set_options", 00:04:52.872 "sock_impl_get_options", 00:04:52.872 "iobuf_get_stats", 00:04:52.872 "iobuf_set_options", 00:04:52.872 "keyring_get_keys", 00:04:52.872 "framework_get_pci_devices", 00:04:52.873 "framework_get_config", 00:04:52.873 "framework_get_subsystems", 00:04:52.873 "fsdev_set_opts", 00:04:52.873 "fsdev_get_opts", 00:04:52.873 "trace_get_info", 00:04:52.873 "trace_get_tpoint_group_mask", 00:04:52.873 "trace_disable_tpoint_group", 00:04:52.873 "trace_enable_tpoint_group", 00:04:52.873 "trace_clear_tpoint_mask", 00:04:52.873 "trace_set_tpoint_mask", 00:04:52.873 "notify_get_notifications", 00:04:52.873 "notify_get_types", 00:04:52.873 "spdk_get_version", 00:04:52.873 "rpc_get_methods" 00:04:52.873 ] 00:04:52.873 09:18:26 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 09:18:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.873 09:18:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59032 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59032 ']' 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59032 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59032 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.873 killing process with pid 59032 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59032' 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59032 00:04:52.873 09:18:26 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59032 00:04:55.441 00:04:55.441 real 0m4.199s 00:04:55.441 user 0m7.422s 00:04:55.441 sys 0m0.650s 00:04:55.441 09:18:29 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.441 09:18:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.441 ************************************ 00:04:55.441 END TEST spdkcli_tcp 00:04:55.441 ************************************ 00:04:55.441 09:18:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.441 09:18:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.441 09:18:29 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.441 09:18:29 -- common/autotest_common.sh@10 -- # set +x 00:04:55.441 ************************************ 00:04:55.441 START TEST dpdk_mem_utility 00:04:55.441 ************************************ 00:04:55.441 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.441 * Looking for test storage... 00:04:55.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:55.441 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.441 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.441 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.441 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.441 
09:18:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.441 09:18:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.442 09:18:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.442 09:18:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.700 09:18:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.700 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.700 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.700 --rc genhtml_branch_coverage=1 00:04:55.700 --rc genhtml_function_coverage=1 00:04:55.700 --rc genhtml_legend=1 00:04:55.700 --rc geninfo_all_blocks=1 00:04:55.700 --rc geninfo_unexecuted_blocks=1 00:04:55.700 00:04:55.700 ' 00:04:55.700 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.700 --rc 
genhtml_branch_coverage=1 00:04:55.700 --rc genhtml_function_coverage=1 00:04:55.700 --rc genhtml_legend=1 00:04:55.700 --rc geninfo_all_blocks=1 00:04:55.700 --rc geninfo_unexecuted_blocks=1 00:04:55.700 00:04:55.700 ' 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.701 --rc genhtml_branch_coverage=1 00:04:55.701 --rc genhtml_function_coverage=1 00:04:55.701 --rc genhtml_legend=1 00:04:55.701 --rc geninfo_all_blocks=1 00:04:55.701 --rc geninfo_unexecuted_blocks=1 00:04:55.701 00:04:55.701 ' 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.701 --rc genhtml_branch_coverage=1 00:04:55.701 --rc genhtml_function_coverage=1 00:04:55.701 --rc genhtml_legend=1 00:04:55.701 --rc geninfo_all_blocks=1 00:04:55.701 --rc geninfo_unexecuted_blocks=1 00:04:55.701 00:04:55.701 ' 00:04:55.701 09:18:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.701 09:18:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59154 00:04:55.701 09:18:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.701 09:18:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59154 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59154 ']' 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.701 09:18:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.701 [2024-12-12 09:18:29.588255] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:55.701 [2024-12-12 09:18:29.588414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:04:55.960 [2024-12-12 09:18:29.772941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.960 [2024-12-12 09:18:29.888805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.899 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.899 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:56.899 09:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.899 09:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.899 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.899 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.899 { 00:04:56.899 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.899 } 00:04:56.899 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.899 09:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.899 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:56.899 1 heaps 
totaling size 824.000000 MiB 00:04:56.899 size: 824.000000 MiB heap id: 0 00:04:56.899 end heaps---------- 00:04:56.899 9 mempools totaling size 603.782043 MiB 00:04:56.899 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.899 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.899 size: 100.555481 MiB name: bdev_io_59154 00:04:56.899 size: 50.003479 MiB name: msgpool_59154 00:04:56.899 size: 36.509338 MiB name: fsdev_io_59154 00:04:56.899 size: 21.763794 MiB name: PDU_Pool 00:04:56.899 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.899 size: 4.133484 MiB name: evtpool_59154 00:04:56.899 size: 0.026123 MiB name: Session_Pool 00:04:56.899 end mempools------- 00:04:56.899 6 memzones totaling size 4.142822 MiB 00:04:56.899 size: 1.000366 MiB name: RG_ring_0_59154 00:04:56.899 size: 1.000366 MiB name: RG_ring_1_59154 00:04:56.899 size: 1.000366 MiB name: RG_ring_4_59154 00:04:56.899 size: 1.000366 MiB name: RG_ring_5_59154 00:04:56.899 size: 0.125366 MiB name: RG_ring_2_59154 00:04:56.899 size: 0.015991 MiB name: RG_ring_3_59154 00:04:56.899 end memzones------- 00:04:56.899 09:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.899 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:04:56.899 list of free elements. 
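The `dpdk_mem_info.py` dump above reports per-mempool sizes in a regular `size: ... MiB name: ...` format, which makes it easy to post-process. A small sketch that sums mempool sizes from such a dump; the embedded sample lines are copied from the log, and the field positions are an assumption based on that output:

```shell
#!/usr/bin/env bash
# Sum the "size: X MiB" lines of a dpdk_mem_info.py-style dump (sketch).
dump='size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 100.555481 MiB name: bdev_io_59154'

# $2 is the numeric size field on each "size:" line.
total=$(printf '%s\n' "$dump" | awk '/^size:/ { s += $2 } END { printf "%.6f", s }')
echo "total: $total MiB"
```

In practice the dump would come from `scripts/dpdk_mem_info.py` against `/tmp/spdk_mem_dump.txt` rather than a here-string; the parsing is the same.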
size: 16.780151 MiB 00:04:56.899 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:56.899 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:56.899 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:56.899 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:56.899 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:56.899 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:56.899 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:56.899 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:56.899 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:56.899 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:56.899 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:56.899 element at address: 0x20001b400000 with size: 0.561218 MiB 00:04:56.899 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:56.899 element at address: 0x200019600000 with size: 0.488220 MiB 00:04:56.899 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:56.899 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:56.899 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:56.899 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:56.899 list of standard malloc elements. 
size: 199.288940 MiB 00:04:56.899 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:56.899 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:56.899 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:56.899 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:56.899 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:56.899 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:56.899 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:56.899 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:56.899 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:56.899 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:56.899 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:56.899 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:56.899 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:56.899 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:56.899 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:56.899 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:56.899 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:56.900 
element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b490cc0 with size: 0.000244 
MiB 00:04:56.900 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4928c0 
with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:56.900 element at 
address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:56.900 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b780 with size: 0.000244 MiB 
00:04:56.900 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d380 with 
size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:56.900 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:56.901 element at address: 
0x20002886ef80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:56.901 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:56.901 list of memzone associated elements. 
size: 607.930908 MiB 00:04:56.901 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:56.901 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.901 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:56.901 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.901 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:56.901 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59154_0 00:04:56.901 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:56.901 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59154_0 00:04:56.901 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:56.901 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59154_0 00:04:56.901 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:56.901 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.901 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:56.901 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.901 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:56.901 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59154_0 00:04:56.901 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:56.901 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59154 00:04:56.901 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:56.901 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59154 00:04:56.901 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:56.901 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.901 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:56.901 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.901 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:56.901 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.901 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:56.901 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.901 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:56.901 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59154 00:04:56.901 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:56.901 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59154 00:04:56.901 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:56.901 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59154 00:04:56.901 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:56.901 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59154 00:04:56.901 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:56.901 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59154 00:04:56.901 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:56.901 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59154 00:04:56.901 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:56.901 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.901 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:56.901 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.901 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:56.901 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.901 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:56.901 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59154 00:04:56.901 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:56.901 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59154 00:04:56.901 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:56.901 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.901 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:56.901 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.901 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:56.901 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59154 00:04:56.901 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:56.901 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.901 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:56.901 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59154 00:04:56.901 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:56.901 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59154 00:04:56.901 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:56.901 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59154 00:04:56.901 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:56.901 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.901 09:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.901 09:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59154 00:04:56.901 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59154 ']' 00:04:56.901 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59154 00:04:56.901 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:56.901 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.901 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59154 00:04:57.160 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.160 09:18:30 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.160 killing process with pid 59154 00:04:57.160 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59154' 00:04:57.160 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59154 00:04:57.160 09:18:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59154 00:04:59.699 00:04:59.699 real 0m4.033s 00:04:59.699 user 0m3.905s 00:04:59.699 sys 0m0.620s 00:04:59.699 09:18:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.699 09:18:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.699 ************************************ 00:04:59.699 END TEST dpdk_mem_utility 00:04:59.699 ************************************ 00:04:59.699 09:18:33 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.699 09:18:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.699 09:18:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.699 09:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.699 ************************************ 00:04:59.699 START TEST event 00:04:59.699 ************************************ 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.699 * Looking for test storage... 
00:04:59.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.699 09:18:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.699 09:18:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.699 09:18:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.699 09:18:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.699 09:18:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.699 09:18:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.699 09:18:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.699 09:18:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.699 09:18:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.699 09:18:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.699 09:18:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.699 09:18:33 event -- scripts/common.sh@344 -- # case "$op" in 00:04:59.699 09:18:33 event -- scripts/common.sh@345 -- # : 1 00:04:59.699 09:18:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.699 09:18:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.699 09:18:33 event -- scripts/common.sh@365 -- # decimal 1 00:04:59.699 09:18:33 event -- scripts/common.sh@353 -- # local d=1 00:04:59.699 09:18:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.699 09:18:33 event -- scripts/common.sh@355 -- # echo 1 00:04:59.699 09:18:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.699 09:18:33 event -- scripts/common.sh@366 -- # decimal 2 00:04:59.699 09:18:33 event -- scripts/common.sh@353 -- # local d=2 00:04:59.699 09:18:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.699 09:18:33 event -- scripts/common.sh@355 -- # echo 2 00:04:59.699 09:18:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.699 09:18:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.699 09:18:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.699 09:18:33 event -- scripts/common.sh@368 -- # return 0 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.699 --rc genhtml_branch_coverage=1 00:04:59.699 --rc genhtml_function_coverage=1 00:04:59.699 --rc genhtml_legend=1 00:04:59.699 --rc geninfo_all_blocks=1 00:04:59.699 --rc geninfo_unexecuted_blocks=1 00:04:59.699 00:04:59.699 ' 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.699 --rc genhtml_branch_coverage=1 00:04:59.699 --rc genhtml_function_coverage=1 00:04:59.699 --rc genhtml_legend=1 00:04:59.699 --rc geninfo_all_blocks=1 00:04:59.699 --rc geninfo_unexecuted_blocks=1 00:04:59.699 00:04:59.699 ' 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.699 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:59.699 --rc genhtml_branch_coverage=1 00:04:59.699 --rc genhtml_function_coverage=1 00:04:59.699 --rc genhtml_legend=1 00:04:59.699 --rc geninfo_all_blocks=1 00:04:59.699 --rc geninfo_unexecuted_blocks=1 00:04:59.699 00:04:59.699 ' 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.699 --rc genhtml_branch_coverage=1 00:04:59.699 --rc genhtml_function_coverage=1 00:04:59.699 --rc genhtml_legend=1 00:04:59.699 --rc geninfo_all_blocks=1 00:04:59.699 --rc geninfo_unexecuted_blocks=1 00:04:59.699 00:04:59.699 ' 00:04:59.699 09:18:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:59.699 09:18:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.699 09:18:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:59.699 09:18:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.699 09:18:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.699 ************************************ 00:04:59.699 START TEST event_perf 00:04:59.699 ************************************ 00:04:59.699 09:18:33 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.699 Running I/O for 1 seconds...[2024-12-12 09:18:33.625262] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:59.699 [2024-12-12 09:18:33.625364] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59262 ] 00:04:59.959 [2024-12-12 09:18:33.781585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.959 Running I/O for 1 seconds...[2024-12-12 09:18:33.894085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.959 [2024-12-12 09:18:33.894239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.959 [2024-12-12 09:18:33.894387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.959 [2024-12-12 09:18:33.894425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.364 00:05:01.364 lcore 0: 105739 00:05:01.364 lcore 1: 105737 00:05:01.364 lcore 2: 105733 00:05:01.364 lcore 3: 105736 00:05:01.364 done. 
00:05:01.364 00:05:01.364 real 0m1.564s 00:05:01.364 user 0m4.330s 00:05:01.364 sys 0m0.111s 00:05:01.364 09:18:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.364 09:18:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.364 ************************************ 00:05:01.364 END TEST event_perf 00:05:01.364 ************************************ 00:05:01.364 09:18:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.364 09:18:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:01.364 09:18:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.364 09:18:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.364 ************************************ 00:05:01.364 START TEST event_reactor 00:05:01.364 ************************************ 00:05:01.364 09:18:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.364 [2024-12-12 09:18:35.259852] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:05:01.364 [2024-12-12 09:18:35.259969] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:05:01.623 [2024-12-12 09:18:35.434763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.623 [2024-12-12 09:18:35.548058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.003 test_start 00:05:03.003 oneshot 00:05:03.003 tick 100 00:05:03.003 tick 100 00:05:03.003 tick 250 00:05:03.003 tick 100 00:05:03.003 tick 100 00:05:03.003 tick 100 00:05:03.003 tick 250 00:05:03.003 tick 500 00:05:03.003 tick 100 00:05:03.003 tick 100 00:05:03.003 tick 250 00:05:03.003 tick 100 00:05:03.003 tick 100 00:05:03.003 test_end 00:05:03.003 00:05:03.003 real 0m1.564s 00:05:03.003 user 0m1.356s 00:05:03.003 sys 0m0.100s 00:05:03.003 09:18:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.003 09:18:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:03.003 ************************************ 00:05:03.003 END TEST event_reactor 00:05:03.003 ************************************ 00:05:03.003 09:18:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.003 09:18:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:03.003 09:18:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.003 09:18:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.003 ************************************ 00:05:03.003 START TEST event_reactor_perf 00:05:03.003 ************************************ 00:05:03.003 09:18:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.003 [2024-12-12 
09:18:36.888589] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:03.003 [2024-12-12 09:18:36.888691] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:05:03.262 [2024-12-12 09:18:37.063459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.262 [2024-12-12 09:18:37.178048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.642 test_start 00:05:04.642 test_end 00:05:04.642 Performance: 363951 events per second 00:05:04.642 00:05:04.642 real 0m1.579s 00:05:04.642 user 0m1.372s 00:05:04.642 sys 0m0.099s 00:05:04.642 09:18:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.642 09:18:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.642 ************************************ 00:05:04.642 END TEST event_reactor_perf 00:05:04.642 ************************************ 00:05:04.642 09:18:38 event -- event/event.sh@49 -- # uname -s 00:05:04.642 09:18:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:04.642 09:18:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.642 09:18:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.642 09:18:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.642 09:18:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.642 ************************************ 00:05:04.642 START TEST event_scheduler 00:05:04.642 ************************************ 00:05:04.642 09:18:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.642 * Looking for test storage... 
00:05:04.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:04.642 09:18:38 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.642 09:18:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.642 09:18:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.901 09:18:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:04.901 09:18:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.902 09:18:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.902 --rc genhtml_branch_coverage=1 00:05:04.902 --rc genhtml_function_coverage=1 00:05:04.902 --rc genhtml_legend=1 00:05:04.902 --rc geninfo_all_blocks=1 00:05:04.902 --rc geninfo_unexecuted_blocks=1 00:05:04.902 00:05:04.902 ' 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.902 --rc genhtml_branch_coverage=1 00:05:04.902 --rc genhtml_function_coverage=1 00:05:04.902 --rc 
genhtml_legend=1 00:05:04.902 --rc geninfo_all_blocks=1 00:05:04.902 --rc geninfo_unexecuted_blocks=1 00:05:04.902 00:05:04.902 ' 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.902 --rc genhtml_branch_coverage=1 00:05:04.902 --rc genhtml_function_coverage=1 00:05:04.902 --rc genhtml_legend=1 00:05:04.902 --rc geninfo_all_blocks=1 00:05:04.902 --rc geninfo_unexecuted_blocks=1 00:05:04.902 00:05:04.902 ' 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.902 --rc genhtml_branch_coverage=1 00:05:04.902 --rc genhtml_function_coverage=1 00:05:04.902 --rc genhtml_legend=1 00:05:04.902 --rc geninfo_all_blocks=1 00:05:04.902 --rc geninfo_unexecuted_blocks=1 00:05:04.902 00:05:04.902 ' 00:05:04.902 09:18:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.902 09:18:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59414 00:05:04.902 09:18:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.902 09:18:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.902 09:18:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59414 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59414 ']' 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.902 09:18:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.902 [2024-12-12 09:18:38.813971] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:04.902 [2024-12-12 09:18:38.814087] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59414 ] 00:05:05.167 [2024-12-12 09:18:38.990524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.167 [2024-12-12 09:18:39.139477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.167 [2024-12-12 09:18:39.139708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.167 [2024-12-12 09:18:39.139862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.167 [2024-12-12 09:18:39.139903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.741 09:18:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.741 09:18:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:05.741 09:18:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:05.742 09:18:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.742 09:18:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.742 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.742 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.742 POWER: Cannot set governor of lcore 0 to performance 00:05:05.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.742 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.742 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.742 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:05.742 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:05.742 POWER: Unable to set Power Management Environment for lcore 0 00:05:05.742 [2024-12-12 09:18:39.652611] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:05.742 [2024-12-12 09:18:39.652638] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:05.742 [2024-12-12 09:18:39.652648] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:05.742 [2024-12-12 09:18:39.652669] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:05.742 [2024-12-12 09:18:39.652677] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:05.742 [2024-12-12 09:18:39.652687] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:05.742 09:18:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.742 09:18:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:05.742 09:18:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.742 09:18:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.000 [2024-12-12 09:18:40.022212] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:06.260 09:18:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.260 09:18:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.260 09:18:40 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 ************************************ 00:05:06.260 START TEST scheduler_create_thread 00:05:06.260 ************************************ 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 2 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 3 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 4 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 5 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 6 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:06.260 7 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 8 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 9 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 10 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.260 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.261 09:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.197 09:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.197 09:18:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.197 09:18:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.197 09:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.197 09:18:41 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.574 09:18:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.574 00:05:08.574 real 0m2.140s 00:05:08.574 user 0m0.029s 00:05:08.574 sys 0m0.008s 00:05:08.574 09:18:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.574 09:18:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.574 ************************************ 00:05:08.574 END TEST scheduler_create_thread 00:05:08.574 ************************************ 00:05:08.574 09:18:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.574 09:18:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59414 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59414 ']' 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59414 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59414 00:05:08.574 killing process with pid 59414 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59414' 00:05:08.574 09:18:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59414 00:05:08.575 09:18:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59414 00:05:08.833 [2024-12-12 09:18:42.655381] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:10.211 00:05:10.211 real 0m5.409s 00:05:10.211 user 0m8.768s 00:05:10.211 sys 0m0.595s 00:05:10.211 09:18:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.211 09:18:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.211 ************************************ 00:05:10.211 END TEST event_scheduler 00:05:10.211 ************************************ 00:05:10.211 09:18:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.211 09:18:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.211 09:18:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.211 09:18:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.211 09:18:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.211 ************************************ 00:05:10.211 START TEST app_repeat 00:05:10.211 ************************************ 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59520 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.211 09:18:43 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.211 Process app_repeat pid: 59520 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59520' 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.211 spdk_app_start Round 0 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.211 09:18:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59520 /var/tmp/spdk-nbd.sock 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59520 ']' 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.211 09:18:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.211 [2024-12-12 09:18:44.043203] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:05:10.211 [2024-12-12 09:18:44.043306] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59520 ] 00:05:10.212 [2024-12-12 09:18:44.219073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.471 [2024-12-12 09:18:44.336386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.471 [2024-12-12 09:18:44.336414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.038 09:18:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.038 09:18:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.038 09:18:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.301 Malloc0 00:05:11.301 09:18:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.562 Malloc1 00:05:11.562 09:18:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.562 09:18:45 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.562 09:18:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.822 /dev/nbd0 00:05:11.822 09:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.822 09:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.822 1+0 records in 00:05:11.822 1+0 
records out 00:05:11.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271761 s, 15.1 MB/s 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.822 09:18:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.822 09:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.822 09:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.822 09:18:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.082 /dev/nbd1 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.082 1+0 records in 00:05:12.082 1+0 records out 00:05:12.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374095 s, 10.9 MB/s 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.082 09:18:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.082 09:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.342 { 00:05:12.342 "nbd_device": "/dev/nbd0", 00:05:12.342 "bdev_name": "Malloc0" 00:05:12.342 }, 00:05:12.342 { 00:05:12.342 "nbd_device": "/dev/nbd1", 00:05:12.342 "bdev_name": "Malloc1" 00:05:12.342 } 00:05:12.342 ]' 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.342 { 00:05:12.342 "nbd_device": "/dev/nbd0", 00:05:12.342 "bdev_name": "Malloc0" 00:05:12.342 }, 00:05:12.342 { 00:05:12.342 "nbd_device": "/dev/nbd1", 00:05:12.342 "bdev_name": "Malloc1" 00:05:12.342 } 00:05:12.342 ]' 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.342 /dev/nbd1' 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.342 /dev/nbd1' 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.342 09:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.343 256+0 records in 00:05:12.343 256+0 records out 00:05:12.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137118 s, 76.5 MB/s 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.343 256+0 records in 00:05:12.343 256+0 records out 00:05:12.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241765 s, 43.4 MB/s 00:05:12.343 09:18:46 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.343 256+0 records in 00:05:12.343 256+0 records out 00:05:12.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264845 s, 39.6 MB/s 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.343 09:18:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.613 09:18:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.891 09:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.892 09:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.151 09:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.151 09:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.151 09:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.151 09:18:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.151 09:18:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.411 09:18:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.793 [2024-12-12 09:18:48.558357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.793 [2024-12-12 09:18:48.672272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.793 [2024-12-12 09:18:48.672276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.053 
[2024-12-12 09:18:48.863471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.053 [2024-12-12 09:18:48.863585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.436 09:18:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.436 09:18:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:16.436 spdk_app_start Round 1 00:05:16.436 09:18:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59520 /var/tmp/spdk-nbd.sock 00:05:16.436 09:18:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59520 ']' 00:05:16.436 09:18:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.436 09:18:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.436 09:18:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:16.436 09:18:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.436 09:18:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.697 09:18:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.697 09:18:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:16.697 09:18:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.957 Malloc0 00:05:16.957 09:18:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.217 Malloc1 00:05:17.217 09:18:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.217 09:18:51 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.217 09:18:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.478 /dev/nbd0 00:05:17.478 09:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.478 09:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.478 09:18:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.478 09:18:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.478 09:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.479 1+0 records in 00:05:17.479 1+0 records out 00:05:17.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243052 s, 16.9 MB/s 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.479 
09:18:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.479 09:18:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.479 09:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.479 09:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.479 09:18:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.739 /dev/nbd1 00:05:17.739 09:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.739 09:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.739 1+0 records in 00:05:17.739 1+0 records out 00:05:17.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460112 s, 8.9 MB/s 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.739 09:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.740 09:18:51 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.740 09:18:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.740 09:18:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.740 09:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.740 09:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.740 09:18:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.740 09:18:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.740 09:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.001 { 00:05:18.001 "nbd_device": "/dev/nbd0", 00:05:18.001 "bdev_name": "Malloc0" 00:05:18.001 }, 00:05:18.001 { 00:05:18.001 "nbd_device": "/dev/nbd1", 00:05:18.001 "bdev_name": "Malloc1" 00:05:18.001 } 00:05:18.001 ]' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.001 { 00:05:18.001 "nbd_device": "/dev/nbd0", 00:05:18.001 "bdev_name": "Malloc0" 00:05:18.001 }, 00:05:18.001 { 00:05:18.001 "nbd_device": "/dev/nbd1", 00:05:18.001 "bdev_name": "Malloc1" 00:05:18.001 } 00:05:18.001 ]' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.001 /dev/nbd1' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.001 /dev/nbd1' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.001 
09:18:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.001 256+0 records in 00:05:18.001 256+0 records out 00:05:18.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466608 s, 225 MB/s 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.001 256+0 records in 00:05:18.001 256+0 records out 00:05:18.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253311 s, 41.4 MB/s 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.001 256+0 records in 00:05:18.001 256+0 records out 00:05:18.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256348 s, 40.9 MB/s 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.001 09:18:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.262 09:18:52 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.262 09:18:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.522 09:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.783 09:18:52 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.783 09:18:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.783 09:18:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.354 09:18:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.374 [2024-12-12 09:18:54.289215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.374 [2024-12-12 09:18:54.394589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.374 [2024-12-12 09:18:54.394617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.633 [2024-12-12 09:18:54.585511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.633 [2024-12-12 09:18:54.585588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.543 spdk_app_start Round 2 00:05:22.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:22.543 09:18:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.543 09:18:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.543 09:18:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59520 /var/tmp/spdk-nbd.sock 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59520 ']' 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.543 09:18:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.543 09:18:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.803 Malloc0 00:05:22.803 09:18:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.069 Malloc1 00:05:23.069 09:18:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.069 09:18:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.329 /dev/nbd0 00:05:23.329 09:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.329 09:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.329 1+0 records in 00:05:23.329 1+0 records out 00:05:23.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283529 s, 14.4 MB/s 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.329 09:18:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.329 09:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.329 09:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.329 09:18:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.589 /dev/nbd1 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.589 09:18:57 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.589 1+0 records in 00:05:23.589 1+0 records out 00:05:23.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407187 s, 10.1 MB/s 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.589 09:18:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.589 09:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.850 { 00:05:23.850 "nbd_device": "/dev/nbd0", 00:05:23.850 "bdev_name": "Malloc0" 00:05:23.850 }, 00:05:23.850 { 00:05:23.850 "nbd_device": "/dev/nbd1", 00:05:23.850 "bdev_name": "Malloc1" 00:05:23.850 } 00:05:23.850 ]' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.850 { 
00:05:23.850 "nbd_device": "/dev/nbd0", 00:05:23.850 "bdev_name": "Malloc0" 00:05:23.850 }, 00:05:23.850 { 00:05:23.850 "nbd_device": "/dev/nbd1", 00:05:23.850 "bdev_name": "Malloc1" 00:05:23.850 } 00:05:23.850 ]' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.850 /dev/nbd1' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.850 /dev/nbd1' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.850 256+0 records in 00:05:23.850 256+0 records out 00:05:23.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00344019 s, 305 MB/s 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.850 09:18:57 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.850 256+0 records in 00:05:23.850 256+0 records out 00:05:23.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245811 s, 42.7 MB/s 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.850 256+0 records in 00:05:23.850 256+0 records out 00:05:23.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251879 s, 41.6 MB/s 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.850 09:18:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
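The trace above exercises `nbd_dd_data_verify`: fill a temp file with 1 MiB of random data, `dd` it onto each nbd device, then `cmp` the device contents back against the source. A minimal standalone sketch of the same write-then-verify pattern follows; plain temp files stand in for the `/dev/nbdX` devices (an assumption made so it runs without an SPDK target), and the real `iflag=direct`/`oflag=direct` options are dropped since they only apply to block devices.

```shell
#!/usr/bin/env bash
# Sketch of the write/verify pattern from bdev/nbd_common.sh; ordinary files
# play the role of the nbd devices, so no SPDK nbd target is required.
set -euo pipefail

tmp_file=$(mktemp)   # plays the role of .../test/event/nbdrandtest
dev0=$(mktemp)       # stand-in for /dev/nbd0
dev1=$(mktemp)       # stand-in for /dev/nbd1
trap 'rm -f "$tmp_file" "$dev0" "$dev1"' EXIT

# "write": generate 1 MiB of random data, then copy it onto each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# "verify": byte-compare the first 1 MiB of each "device" against the source,
# mirroring the `cmp -b -n 1M` calls in the trace
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
echo "verify ok"
```

The `-n 1M` size suffix and `-b` (print differing bytes) match GNU `cmp` as used in the trace; `cmp` exits non-zero on any mismatch, which is what makes this usable as a pass/fail verify step.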
00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.109 09:18:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.109 09:18:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.369 09:18:58 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.369 09:18:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.629 09:18:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.629 09:18:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.197 09:18:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.134 
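In the trace, `nbd_get_count` pipes the JSON returned by the `nbd_get_disks` RPC through `jq -r '.[] | .nbd_device'` and counts matches with `grep -c /dev/nbd`. The sketch below replays that parsing step standalone: the RPC call is replaced by a hardcoded response copied from the log (an assumption so no SPDK socket is needed), and it assumes `jq` is installed.

```shell
#!/usr/bin/env bash
# Standalone sketch of nbd_get_count's JSON handling; the hardcoded JSON
# mirrors the nbd_get_disks response captured in the trace above.
set -euo pipefail

nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract one device path per line, then count the /dev/nbd entries.
# `|| true` keeps the count at 0 (instead of aborting) when nothing matches,
# which is the case the trace hits after the disks are stopped.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # prints 2
```

After `nbd_stop_disks` the same pipeline runs against `[]`, yielding a count of 0, which is what the `'[' 0 -ne 0 ']'` check in the trace confirms.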
[2024-12-12 09:19:00.078295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.394 [2024-12-12 09:19:00.184979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.394 [2024-12-12 09:19:00.185009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.394 [2024-12-12 09:19:00.371859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.394 [2024-12-12 09:19:00.371946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.300 09:19:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59520 /var/tmp/spdk-nbd.sock 00:05:28.300 09:19:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59520 ']' 00:05:28.300 09:19:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.300 09:19:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.300 09:19:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:28.300 09:19:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.300 09:19:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.300 09:19:02 event.app_repeat -- event/event.sh@39 -- # killprocess 59520 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59520 ']' 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59520 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59520 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.300 09:19:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59520' 00:05:28.301 killing process with pid 59520 00:05:28.301 09:19:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59520 00:05:28.301 09:19:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59520 00:05:29.240 spdk_app_start is called in Round 0. 00:05:29.240 Shutdown signal received, stop current app iteration 00:05:29.240 Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 reinitialization... 00:05:29.240 spdk_app_start is called in Round 1. 00:05:29.240 Shutdown signal received, stop current app iteration 00:05:29.240 Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 reinitialization... 00:05:29.240 spdk_app_start is called in Round 2. 
00:05:29.240 Shutdown signal received, stop current app iteration 00:05:29.240 Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 reinitialization... 00:05:29.240 spdk_app_start is called in Round 3. 00:05:29.240 Shutdown signal received, stop current app iteration 00:05:29.240 09:19:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:29.240 09:19:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:29.240 00:05:29.240 real 0m19.258s 00:05:29.240 user 0m41.154s 00:05:29.240 sys 0m2.854s 00:05:29.240 09:19:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.240 09:19:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.240 ************************************ 00:05:29.240 END TEST app_repeat 00:05:29.240 ************************************ 00:05:29.499 09:19:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:29.499 09:19:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.499 09:19:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.499 09:19:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.499 09:19:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.499 ************************************ 00:05:29.499 START TEST cpu_locks 00:05:29.499 ************************************ 00:05:29.499 09:19:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.499 * Looking for test storage... 
00:05:29.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.499 09:19:03 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.499 09:19:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.499 09:19:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.499 09:19:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.499 09:19:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.499 09:19:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.500 09:19:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.759 09:19:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.759 --rc genhtml_branch_coverage=1 00:05:29.759 --rc genhtml_function_coverage=1 00:05:29.759 --rc genhtml_legend=1 00:05:29.759 --rc geninfo_all_blocks=1 00:05:29.759 --rc geninfo_unexecuted_blocks=1 00:05:29.759 00:05:29.759 ' 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.759 --rc genhtml_branch_coverage=1 00:05:29.759 --rc genhtml_function_coverage=1 00:05:29.759 --rc genhtml_legend=1 00:05:29.759 --rc geninfo_all_blocks=1 00:05:29.759 --rc geninfo_unexecuted_blocks=1 
00:05:29.759 00:05:29.759 ' 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.759 --rc genhtml_branch_coverage=1 00:05:29.759 --rc genhtml_function_coverage=1 00:05:29.759 --rc genhtml_legend=1 00:05:29.759 --rc geninfo_all_blocks=1 00:05:29.759 --rc geninfo_unexecuted_blocks=1 00:05:29.759 00:05:29.759 ' 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.759 --rc genhtml_branch_coverage=1 00:05:29.759 --rc genhtml_function_coverage=1 00:05:29.759 --rc genhtml_legend=1 00:05:29.759 --rc geninfo_all_blocks=1 00:05:29.759 --rc geninfo_unexecuted_blocks=1 00:05:29.759 00:05:29.759 ' 00:05:29.759 09:19:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:29.759 09:19:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:29.759 09:19:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:29.759 09:19:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.759 09:19:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.759 ************************************ 00:05:29.759 START TEST default_locks 00:05:29.759 ************************************ 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59967 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.759 
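The `lt 1.15 2` call traced above dispatches to `cmp_versions`, which splits each version string on `.`, `-` and `:` (`IFS=.-:` with `read -ra`) and compares the fields numerically. Below is a simplified standalone re-implementation of that comparison (not the exact `scripts/common.sh` source; it assumes purely numeric components, so no leading-zero/octal handling).

```shell
#!/usr/bin/env bash
# Simplified sketch of the scripts/common.sh version comparison traced above:
# split on '.', '-' and ':', then compare components numerically left to right.

# Returns 0 (success) if version $1 is strictly less than version $2.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    # Walk the longer of the two component lists; missing fields count as 0,
    # so e.g. 1.15 compares as 1.15.0 against 1.15.1.
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

if version_lt 1.15 2; then
    echo "1.15 < 2"   # the lcov-version check from the trace takes this branch
fi
```

This is why the trace's `lt 1.15 2` succeeds: the first components already differ (1 < 2), so the remaining fields are never examined.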
09:19:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59967 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59967 ']' 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.759 09:19:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.759 [2024-12-12 09:19:03.658870] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:05:29.760 [2024-12-12 09:19:03.659123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:05:30.019 [2024-12-12 09:19:03.837074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.019 [2024-12-12 09:19:03.951346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.964 09:19:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.964 09:19:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:30.964 09:19:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59967 00:05:30.964 09:19:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59967 00:05:30.964 09:19:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.223 09:19:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59967 00:05:31.223 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59967 ']' 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59967 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59967 00:05:31.224 killing process with pid 59967 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59967' 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59967 00:05:31.224 09:19:05 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59967 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59967 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59967 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59967 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59967 ']' 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.767 ERROR: process (pid: 59967) is no longer running 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59967) - No such process 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.767 00:05:33.767 real 0m3.860s 00:05:33.767 user 0m3.761s 00:05:33.767 sys 0m0.574s 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.767 09:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.767 ************************************ 00:05:33.767 END TEST default_locks 00:05:33.767 ************************************ 00:05:33.767 09:19:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:33.767 09:19:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.767 09:19:07 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.767 09:19:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.767 ************************************ 00:05:33.767 START TEST default_locks_via_rpc 00:05:33.767 ************************************ 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60037 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60037 00:05:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60037 ']' 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.767 09:19:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.767 [2024-12-12 09:19:07.577234] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:05:33.767 [2024-12-12 09:19:07.577422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:05:33.767 [2024-12-12 09:19:07.751212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.027 [2024-12-12 09:19:07.860777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.966 09:19:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60037 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60037 00:05:34.966 09:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60037 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60037 ']' 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60037 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60037 00:05:35.225 killing process with pid 60037 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60037' 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60037 00:05:35.225 09:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60037 00:05:37.763 00:05:37.763 real 0m4.081s 00:05:37.763 user 0m4.016s 00:05:37.763 sys 0m0.700s 00:05:37.763 09:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.763 
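The `kill -0 60037` / `ps --no-headers -o comm=` sequence above is the harness's killprocess helper verifying the target still exists and is really an SPDK reactor before signalling it. A minimal sketch of that pattern (assumption: the real helper in `autotest_common.sh` adds retries and sudo handling not shown here):

```shell
# Sketch of the killprocess-style liveness check seen in the log.
# kill -0 delivers no signal; it only tests that the pid exists and
# that we have permission to signal it.
process_alive() {
    kill -0 "$1" 2>/dev/null
}

# ps --no-headers -o comm= prints just the command name, which the
# harness compares against reactor_0 before deciding how to kill.
process_name() {
    ps --no-headers -o comm= -p "$1"
}

process_alive $$ && echo "pid $$ is alive ($(process_name $$))"
```

Checking the name first avoids signalling an unrelated process if the pid has been recycled between the test start and teardown.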
************************************ 00:05:37.763 END TEST default_locks_via_rpc 00:05:37.763 ************************************ 00:05:37.763 09:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.763 09:19:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:37.763 09:19:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.763 09:19:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.763 09:19:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.763 ************************************ 00:05:37.763 START TEST non_locking_app_on_locked_coremask 00:05:37.763 ************************************ 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60111 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60111 /var/tmp/spdk.sock 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60111 ']' 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:37.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.763 09:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.763 [2024-12-12 09:19:11.721517] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:37.763 [2024-12-12 09:19:11.721634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:05:38.022 [2024-12-12 09:19:11.895913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.022 [2024-12-12 09:19:12.007806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60127 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60127 /var/tmp/spdk2.sock 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60127 ']' 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.960 09:19:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.960 09:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.960 [2024-12-12 09:19:12.943427] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:38.960 [2024-12-12 09:19:12.943921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60127 ] 00:05:39.219 [2024-12-12 09:19:13.110722] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.219 [2024-12-12 09:19:13.110771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.483 [2024-12-12 09:19:13.335471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.025 09:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.025 09:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.025 09:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60111 00:05:42.025 09:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.025 09:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60111 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60111 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60111 ']' 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60111 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60111 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.593 killing process with pid 60111 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60111' 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60111 00:05:42.593 09:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60111 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60127 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60127 ']' 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60127 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60127 00:05:47.871 killing process with pid 60127 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60127' 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60127 00:05:47.871 09:19:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60127 00:05:49.799 00:05:49.799 real 0m11.788s 00:05:49.799 user 0m12.028s 00:05:49.799 sys 0m1.398s 00:05:49.799 09:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:49.799 09:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.799 ************************************ 00:05:49.799 END TEST non_locking_app_on_locked_coremask 00:05:49.799 ************************************ 00:05:49.799 09:19:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.799 09:19:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.799 09:19:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.799 09:19:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.799 ************************************ 00:05:49.799 START TEST locking_app_on_unlocked_coremask 00:05:49.799 ************************************ 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60277 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60277 /var/tmp/spdk.sock 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60277 ']' 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.799 09:19:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.799 [2024-12-12 09:19:23.580362] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:49.799 [2024-12-12 09:19:23.580531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60277 ] 00:05:49.799 [2024-12-12 09:19:23.754952] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.800 [2024-12-12 09:19:23.755083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.059 [2024-12-12 09:19:23.867912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60304 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60304 /var/tmp/spdk2.sock 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60304 ']' 00:05:50.998 09:19:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.998 09:19:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 [2024-12-12 09:19:24.847194] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:50.998 [2024-12-12 09:19:24.847386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ] 00:05:50.998 [2024-12-12 09:19:25.018217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.565 [2024-12-12 09:19:25.297810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.473 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.473 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.473 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60304 00:05:53.473 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60304 00:05:53.473 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60277 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60277 ']' 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60277 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60277 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.041 killing process with pid 60277 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60277' 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60277 00:05:54.041 09:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60277 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60304 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60304 ']' 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60304 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.316 
09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60304 00:05:59.316 killing process with pid 60304 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60304' 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60304 00:05:59.316 09:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60304 00:06:01.853 ************************************ 00:06:01.853 END TEST locking_app_on_unlocked_coremask 00:06:01.853 ************************************ 00:06:01.853 00:06:01.853 real 0m12.221s 00:06:01.853 user 0m12.198s 00:06:01.853 sys 0m1.444s 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.853 09:19:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.853 09:19:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.853 09:19:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.853 09:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.853 ************************************ 00:06:01.853 START TEST locking_app_on_locked_coremask 00:06:01.853 
************************************ 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60457 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60457 /var/tmp/spdk.sock 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60457 ']' 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.853 09:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.853 [2024-12-12 09:19:35.869807] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:01.853 [2024-12-12 09:19:35.869923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:06:02.112 [2024-12-12 09:19:36.027255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.371 [2024-12-12 09:19:36.167468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60479 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60479 /var/tmp/spdk2.sock 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60479 /var/tmp/spdk2.sock 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:03.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60479 /var/tmp/spdk2.sock 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60479 ']' 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.308 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.308 [2024-12-12 09:19:37.278845] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:03.308 [2024-12-12 09:19:37.279093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60479 ] 00:06:03.581 [2024-12-12 09:19:37.451206] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60457 has claimed it. 00:06:03.581 [2024-12-12 09:19:37.451285] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
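The `claim_cpu_cores` ERROR below is the expected outcome: the second `spdk_tgt` fails to take the per-core lock file that pid 60457 already holds and exits cleanly. The mechanism can be sketched with `flock` on a stand-in path (assumption: SPDK's real lock files live under `/var/tmp/spdk_cpu_lock_NNN`; a temp file is used here so the demo is self-contained, and two fds in one process emulate two processes, since `flock` treats separately opened descriptors independently):

```shell
# Demo of the core-lock conflict the test provokes.
lock=$(mktemp /tmp/demo_cpu_lock_XXXXXX)

exec 8>"$lock"                  # first "instance" opens the lock file
flock -n 8 && echo "first instance claimed the core"

exec 9>"$lock"                  # second open of the same file
if ! flock -n 9; then
    # analogous to "Cannot create lock on core 0, probably process
    # NNN has claimed it" followed by a clean exit
    echo "second instance: core already claimed, exiting"
fi
```

Because the failure is deliberate, the test wraps `waitforlisten` in `NOT` and then asserts `es=1` plus the "No such process" result seen in the log.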
00:06:04.176 ERROR: process (pid: 60479) is no longer running 00:06:04.176 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60479) - No such process 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60457 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60457 00:06:04.176 09:19:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.176 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60457 00:06:04.176 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60457 ']' 00:06:04.176 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60457 00:06:04.176 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.176 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.176 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60457 00:06:04.436 
09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.436 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.436 killing process with pid 60457 00:06:04.436 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60457' 00:06:04.436 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60457 00:06:04.436 09:19:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60457 00:06:06.974 ************************************ 00:06:06.974 END TEST locking_app_on_locked_coremask 00:06:06.974 ************************************ 00:06:06.974 00:06:06.974 real 0m5.026s 00:06:06.974 user 0m4.988s 00:06:06.974 sys 0m0.910s 00:06:06.974 09:19:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.974 09:19:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.974 09:19:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.974 09:19:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.974 09:19:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.974 09:19:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.974 ************************************ 00:06:06.974 START TEST locking_overlapped_coremask 00:06:06.974 ************************************ 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60548 00:06:06.974 09:19:40 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60548 /var/tmp/spdk.sock 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60548 ']' 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.974 09:19:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.974 [2024-12-12 09:19:40.961280] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:06.974 [2024-12-12 09:19:40.961485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60548 ] 00:06:07.233 [2024-12-12 09:19:41.138602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.492 [2024-12-12 09:19:41.282124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.492 [2024-12-12 09:19:41.282330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.492 [2024-12-12 09:19:41.282288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60572 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60572 /var/tmp/spdk2.sock 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60572 /var/tmp/spdk2.sock 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60572 /var/tmp/spdk2.sock 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60572 ']' 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.428 09:19:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.428 [2024-12-12 09:19:42.426605] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:08.428 [2024-12-12 09:19:42.426725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60572 ] 00:06:08.687 [2024-12-12 09:19:42.600786] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60548 has claimed it. 00:06:08.687 [2024-12-12 09:19:42.600893] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
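The failure traced above is the intended outcome of this test: the first target was started with `-m 0x7` (cores 0-2) and the second with `-m 0x1c` (cores 2-4), so both masks claim core 2 and the second target exits with "Cannot create lock on core 2". A small standalone sketch (not SPDK code) of how those hex coremasks decode and where they collide:

```python
def mask_to_cores(mask: int) -> set[int]:
    """Expand a hex coremask (as passed to spdk_tgt -m) into core ids."""
    return {bit for bit in range(mask.bit_length()) if mask >> bit & 1}

first = mask_to_cores(0x7)    # first target:  cores {0, 1, 2}
second = mask_to_cores(0x1c)  # second target: cores {2, 3, 4}
overlap = first & second      # {2} -- the core the log reports as already claimed
```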
00:06:09.256 ERROR: process (pid: 60572) is no longer running 00:06:09.256 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60572) - No such process 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60548 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60548 ']' 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60548 00:06:09.256 09:19:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60548 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60548' 00:06:09.256 killing process with pid 60548 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60548 00:06:09.256 09:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60548 00:06:11.888 00:06:11.888 real 0m4.867s 00:06:11.888 user 0m13.038s 00:06:11.888 sys 0m0.779s 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 ************************************ 00:06:11.888 END TEST locking_overlapped_coremask 00:06:11.888 ************************************ 00:06:11.888 09:19:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.888 09:19:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.888 09:19:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.888 09:19:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 ************************************ 00:06:11.888 START TEST 
locking_overlapped_coremask_via_rpc 00:06:11.888 ************************************ 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60636 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60636 /var/tmp/spdk.sock 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60636 ']' 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.888 09:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 [2024-12-12 09:19:45.900861] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:11.888 [2024-12-12 09:19:45.901313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60636 ] 00:06:12.147 [2024-12-12 09:19:46.076445] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.147 [2024-12-12 09:19:46.076511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.406 [2024-12-12 09:19:46.219839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.406 [2024-12-12 09:19:46.220001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.406 [2024-12-12 09:19:46.220082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60665 00:06:13.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
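The `check_remaining_locks` helper traced earlier (cpu_locks.sh@36-38) globs `/var/tmp/spdk_cpu_lock_*` and compares the result against the brace-expanded list `spdk_cpu_lock_{000..002}` that a `-m 0x7` mask predicts. A minimal sketch of the same glob-and-compare idea, using a temporary directory as a stand-in for `/var/tmp` (the paths here are hypothetical, not SPDK's):

```python
import glob
import os
import tempfile

workdir = tempfile.mkdtemp()  # stand-in for /var/tmp
for core in (0, 1, 2):        # cores claimed by -m 0x7
    open(os.path.join(workdir, f"spdk_cpu_lock_{core:03d}"), "w").close()

# Glob what exists and compare against what the coremask predicts.
locks = sorted(glob.glob(os.path.join(workdir, "spdk_cpu_lock_*")))
expected = [os.path.join(workdir, f"spdk_cpu_lock_{c:03d}") for c in range(3)]
```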
00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60665 /var/tmp/spdk2.sock 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60665 ']' 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.343 09:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.602 [2024-12-12 09:19:47.389661] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:13.602 [2024-12-12 09:19:47.389800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60665 ] 00:06:13.602 [2024-12-12 09:19:47.563742] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.602 [2024-12-12 09:19:47.563808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.861 [2024-12-12 09:19:47.856645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.861 [2024-12-12 09:19:47.856806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.861 [2024-12-12 09:19:47.856870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.400 09:19:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 [2024-12-12 09:19:49.958149] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60636 has claimed it. 00:06:16.400 request: 00:06:16.400 { 00:06:16.400 "method": "framework_enable_cpumask_locks", 00:06:16.400 "req_id": 1 00:06:16.400 } 00:06:16.400 Got JSON-RPC error response 00:06:16.400 response: 00:06:16.400 { 00:06:16.400 "code": -32603, 00:06:16.400 "message": "Failed to claim CPU core: 2" 00:06:16.400 } 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60636 /var/tmp/spdk.sock 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 60636 ']' 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.400 09:19:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60665 /var/tmp/spdk2.sock 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60665 ']' 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
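The RPC exchange shown above (method `framework_enable_cpumask_locks`, error `-32603` "Failed to claim CPU core: 2") is the via-RPC variant of the same core conflict. A sketch of building and parsing bodies shaped after what the log prints; the field names mirror the log output, not necessarily SPDK's exact wire format:

```python
import json

# Request and error response as printed in the log above.
request = {"method": "framework_enable_cpumask_locks", "req_id": 1}
error_response = {"code": -32603, "message": "Failed to claim CPU core: 2"}

payload = json.dumps(request)          # serialize for the UNIX-socket RPC channel
decoded = json.loads(payload)          # what the target would parse back out
contested_core = int(error_response["message"].rsplit(":", 1)[1])  # core 2
```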
00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.400 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.659 00:06:16.659 real 0m4.661s 00:06:16.659 user 0m1.292s 00:06:16.659 sys 0m0.246s 00:06:16.659 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.660 09:19:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.660 ************************************ 00:06:16.660 END TEST locking_overlapped_coremask_via_rpc 00:06:16.660 ************************************ 00:06:16.660 09:19:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:16.660 09:19:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60636 ]] 00:06:16.660 09:19:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 60636 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60636 ']' 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60636 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60636 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60636' 00:06:16.660 killing process with pid 60636 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60636 00:06:16.660 09:19:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60636 00:06:19.973 09:19:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60665 ]] 00:06:19.973 09:19:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60665 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60665 ']' 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60665 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60665 00:06:19.973 killing process with pid 60665 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 60665' 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60665 00:06:19.973 09:19:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60665 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60636 ]] 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60636 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60636 ']' 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60636 00:06:22.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60636) - No such process 00:06:22.509 Process with pid 60636 is not found 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60636 is not found' 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60665 ]] 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60665 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60665 ']' 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60665 00:06:22.509 Process with pid 60665 is not found 00:06:22.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60665) - No such process 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60665 is not found' 00:06:22.509 09:19:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.509 00:06:22.509 real 0m52.608s 00:06:22.509 user 1m29.722s 00:06:22.509 sys 0m7.691s 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.509 ************************************ 00:06:22.509 END TEST cpu_locks 00:06:22.509 
************************************ 00:06:22.509 09:19:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.509 ************************************ 00:06:22.509 END TEST event 00:06:22.509 ************************************ 00:06:22.509 00:06:22.509 real 1m22.628s 00:06:22.509 user 2m26.934s 00:06:22.509 sys 0m11.882s 00:06:22.509 09:19:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.509 09:19:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.509 09:19:56 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.509 09:19:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.509 09:19:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.509 09:19:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.509 ************************************ 00:06:22.509 START TEST thread 00:06:22.509 ************************************ 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.509 * Looking for test storage... 
00:06:22.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.509 09:19:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.509 09:19:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.509 09:19:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.509 09:19:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.509 09:19:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.509 09:19:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.509 09:19:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.509 09:19:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.509 09:19:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.509 09:19:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.509 09:19:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.509 09:19:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:22.509 09:19:56 thread -- scripts/common.sh@345 -- # : 1 00:06:22.509 09:19:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.509 09:19:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.509 09:19:56 thread -- scripts/common.sh@365 -- # decimal 1 00:06:22.509 09:19:56 thread -- scripts/common.sh@353 -- # local d=1 00:06:22.509 09:19:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.509 09:19:56 thread -- scripts/common.sh@355 -- # echo 1 00:06:22.509 09:19:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.509 09:19:56 thread -- scripts/common.sh@366 -- # decimal 2 00:06:22.509 09:19:56 thread -- scripts/common.sh@353 -- # local d=2 00:06:22.509 09:19:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.509 09:19:56 thread -- scripts/common.sh@355 -- # echo 2 00:06:22.509 09:19:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.509 09:19:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.509 09:19:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.509 09:19:56 thread -- scripts/common.sh@368 -- # return 0 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.509 --rc genhtml_branch_coverage=1 00:06:22.509 --rc genhtml_function_coverage=1 00:06:22.509 --rc genhtml_legend=1 00:06:22.509 --rc geninfo_all_blocks=1 00:06:22.509 --rc geninfo_unexecuted_blocks=1 00:06:22.509 00:06:22.509 ' 00:06:22.509 09:19:56 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.510 --rc genhtml_branch_coverage=1 00:06:22.510 --rc genhtml_function_coverage=1 00:06:22.510 --rc genhtml_legend=1 00:06:22.510 --rc geninfo_all_blocks=1 00:06:22.510 --rc geninfo_unexecuted_blocks=1 00:06:22.510 00:06:22.510 ' 00:06:22.510 09:19:56 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.510 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.510 --rc genhtml_branch_coverage=1 00:06:22.510 --rc genhtml_function_coverage=1 00:06:22.510 --rc genhtml_legend=1 00:06:22.510 --rc geninfo_all_blocks=1 00:06:22.510 --rc geninfo_unexecuted_blocks=1 00:06:22.510 00:06:22.510 ' 00:06:22.510 09:19:56 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.510 --rc genhtml_branch_coverage=1 00:06:22.510 --rc genhtml_function_coverage=1 00:06:22.510 --rc genhtml_legend=1 00:06:22.510 --rc geninfo_all_blocks=1 00:06:22.510 --rc geninfo_unexecuted_blocks=1 00:06:22.510 00:06:22.510 ' 00:06:22.510 09:19:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.510 09:19:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:22.510 09:19:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.510 09:19:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.510 ************************************ 00:06:22.510 START TEST thread_poller_perf 00:06:22.510 ************************************ 00:06:22.510 09:19:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.510 [2024-12-12 09:19:56.328204] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:22.510 [2024-12-12 09:19:56.328312] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:06:22.510 [2024-12-12 09:19:56.503525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.769 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.769 [2024-12-12 09:19:56.635083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.151 [2024-12-12T09:19:58.174Z] ====================================== 00:06:24.151 [2024-12-12T09:19:58.174Z] busy:2298009452 (cyc) 00:06:24.151 [2024-12-12T09:19:58.174Z] total_run_count: 421000 00:06:24.151 [2024-12-12T09:19:58.174Z] tsc_hz: 2290000000 (cyc) 00:06:24.151 [2024-12-12T09:19:58.174Z] ====================================== 00:06:24.151 [2024-12-12T09:19:58.174Z] poller_cost: 5458 (cyc), 2383 (nsec) 00:06:24.151 00:06:24.151 real 0m1.602s 00:06:24.151 user 0m1.380s 00:06:24.151 sys 0m0.115s 00:06:24.151 09:19:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.151 ************************************ 00:06:24.151 END TEST thread_poller_perf 00:06:24.151 ************************************ 00:06:24.151 09:19:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.151 09:19:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.151 09:19:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.151 09:19:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.151 09:19:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.151 ************************************ 00:06:24.151 START TEST thread_poller_perf 00:06:24.151 
************************************ 00:06:24.151 09:19:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.151 [2024-12-12 09:19:58.004772] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:24.151 [2024-12-12 09:19:58.005018] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60902 ] 00:06:24.411 [2024-12-12 09:19:58.186166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.411 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:24.411 [2024-12-12 09:19:58.324792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.791 [2024-12-12T09:19:59.814Z] ====================================== 00:06:25.791 [2024-12-12T09:19:59.814Z] busy:2294176390 (cyc) 00:06:25.791 [2024-12-12T09:19:59.814Z] total_run_count: 5001000 00:06:25.791 [2024-12-12T09:19:59.814Z] tsc_hz: 2290000000 (cyc) 00:06:25.791 [2024-12-12T09:19:59.814Z] ====================================== 00:06:25.791 [2024-12-12T09:19:59.814Z] poller_cost: 458 (cyc), 200 (nsec) 00:06:25.791 00:06:25.791 real 0m1.615s 00:06:25.791 user 0m1.374s 00:06:25.791 sys 0m0.132s 00:06:25.791 09:19:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.791 09:19:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.791 ************************************ 00:06:25.791 END TEST thread_poller_perf 00:06:25.791 ************************************ 00:06:25.791 09:19:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.791 00:06:25.791 real 0m3.582s 00:06:25.791 user 0m2.915s 00:06:25.791 sys 0m0.466s 00:06:25.791 ************************************ 
00:06:25.791 END TEST thread 00:06:25.791 ************************************ 00:06:25.791 09:19:59 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.791 09:19:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.791 09:19:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:25.791 09:19:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.791 09:19:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.791 09:19:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.791 09:19:59 -- common/autotest_common.sh@10 -- # set +x 00:06:25.791 ************************************ 00:06:25.791 START TEST app_cmdline 00:06:25.791 ************************************ 00:06:25.791 09:19:59 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.791 * Looking for test storage... 00:06:26.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.051 09:19:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.051 --rc genhtml_branch_coverage=1 00:06:26.051 --rc genhtml_function_coverage=1 00:06:26.051 --rc 
genhtml_legend=1 00:06:26.051 --rc geninfo_all_blocks=1 00:06:26.051 --rc geninfo_unexecuted_blocks=1 00:06:26.051 00:06:26.051 ' 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.051 --rc genhtml_branch_coverage=1 00:06:26.051 --rc genhtml_function_coverage=1 00:06:26.051 --rc genhtml_legend=1 00:06:26.051 --rc geninfo_all_blocks=1 00:06:26.051 --rc geninfo_unexecuted_blocks=1 00:06:26.051 00:06:26.051 ' 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.051 --rc genhtml_branch_coverage=1 00:06:26.051 --rc genhtml_function_coverage=1 00:06:26.051 --rc genhtml_legend=1 00:06:26.051 --rc geninfo_all_blocks=1 00:06:26.051 --rc geninfo_unexecuted_blocks=1 00:06:26.051 00:06:26.051 ' 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.051 --rc genhtml_branch_coverage=1 00:06:26.051 --rc genhtml_function_coverage=1 00:06:26.051 --rc genhtml_legend=1 00:06:26.051 --rc geninfo_all_blocks=1 00:06:26.051 --rc geninfo_unexecuted_blocks=1 00:06:26.051 00:06:26.051 ' 00:06:26.051 09:19:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.051 09:19:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.051 09:19:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60991 00:06:26.051 09:19:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60991 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60991 ']' 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.051 09:19:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.051 [2024-12-12 09:20:00.006856] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:26.051 [2024-12-12 09:20:00.007085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:06:26.309 [2024-12-12 09:20:00.182145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.309 [2024-12-12 09:20:00.314354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.688 09:20:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.688 09:20:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:27.688 09:20:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:27.688 { 00:06:27.688 "version": "SPDK v25.01-pre git sha1 b9cf27559", 00:06:27.688 "fields": { 00:06:27.688 "major": 25, 00:06:27.688 "minor": 1, 00:06:27.688 "patch": 0, 00:06:27.688 "suffix": "-pre", 00:06:27.689 "commit": "b9cf27559" 00:06:27.689 } 00:06:27.689 } 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.689 09:20:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:27.689 09:20:01 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.948 request: 00:06:27.948 { 00:06:27.948 "method": "env_dpdk_get_mem_stats", 00:06:27.948 "req_id": 1 00:06:27.948 } 00:06:27.948 Got JSON-RPC error response 00:06:27.948 response: 00:06:27.948 { 00:06:27.948 "code": -32601, 00:06:27.948 "message": "Method not found" 00:06:27.948 } 00:06:27.948 09:20:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:27.948 09:20:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.949 09:20:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60991 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60991 ']' 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60991 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60991 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60991' 00:06:27.949 killing process with pid 60991 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 60991 00:06:27.949 09:20:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 60991 00:06:30.485 00:06:30.485 real 0m4.667s 00:06:30.485 user 0m4.663s 00:06:30.485 sys 0m0.804s 00:06:30.485 
************************************ 00:06:30.485 END TEST app_cmdline 00:06:30.485 ************************************ 00:06:30.485 09:20:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.485 09:20:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.485 09:20:04 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.485 09:20:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.485 09:20:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.485 09:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:30.485 ************************************ 00:06:30.485 START TEST version 00:06:30.485 ************************************ 00:06:30.485 09:20:04 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.745 * Looking for test storage... 00:06:30.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.745 09:20:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.745 09:20:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.745 09:20:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.745 09:20:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.745 09:20:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.745 09:20:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.745 09:20:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.745 09:20:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.745 09:20:04 version -- scripts/common.sh@340 -- # ver1_l=2 
00:06:30.745 09:20:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.745 09:20:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.745 09:20:04 version -- scripts/common.sh@344 -- # case "$op" in 00:06:30.745 09:20:04 version -- scripts/common.sh@345 -- # : 1 00:06:30.745 09:20:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.745 09:20:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.745 09:20:04 version -- scripts/common.sh@365 -- # decimal 1 00:06:30.745 09:20:04 version -- scripts/common.sh@353 -- # local d=1 00:06:30.745 09:20:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.745 09:20:04 version -- scripts/common.sh@355 -- # echo 1 00:06:30.745 09:20:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.745 09:20:04 version -- scripts/common.sh@366 -- # decimal 2 00:06:30.745 09:20:04 version -- scripts/common.sh@353 -- # local d=2 00:06:30.745 09:20:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.745 09:20:04 version -- scripts/common.sh@355 -- # echo 2 00:06:30.745 09:20:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.745 09:20:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.745 09:20:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.745 09:20:04 version -- scripts/common.sh@368 -- # return 0 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.745 --rc genhtml_branch_coverage=1 00:06:30.745 --rc genhtml_function_coverage=1 00:06:30.745 --rc genhtml_legend=1 00:06:30.745 --rc geninfo_all_blocks=1 00:06:30.745 --rc geninfo_unexecuted_blocks=1 00:06:30.745 00:06:30.745 ' 00:06:30.745 09:20:04 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.745 --rc genhtml_branch_coverage=1 00:06:30.745 --rc genhtml_function_coverage=1 00:06:30.745 --rc genhtml_legend=1 00:06:30.745 --rc geninfo_all_blocks=1 00:06:30.745 --rc geninfo_unexecuted_blocks=1 00:06:30.745 00:06:30.745 ' 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.745 --rc genhtml_branch_coverage=1 00:06:30.745 --rc genhtml_function_coverage=1 00:06:30.745 --rc genhtml_legend=1 00:06:30.745 --rc geninfo_all_blocks=1 00:06:30.745 --rc geninfo_unexecuted_blocks=1 00:06:30.745 00:06:30.745 ' 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.745 --rc genhtml_branch_coverage=1 00:06:30.745 --rc genhtml_function_coverage=1 00:06:30.745 --rc genhtml_legend=1 00:06:30.745 --rc geninfo_all_blocks=1 00:06:30.745 --rc geninfo_unexecuted_blocks=1 00:06:30.745 00:06:30.745 ' 00:06:30.745 09:20:04 version -- app/version.sh@17 -- # get_header_version major 00:06:30.745 09:20:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # cut -f2 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.745 09:20:04 version -- app/version.sh@17 -- # major=25 00:06:30.745 09:20:04 version -- app/version.sh@18 -- # get_header_version minor 00:06:30.745 09:20:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # cut -f2 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.745 09:20:04 version -- app/version.sh@18 -- 
# minor=1 00:06:30.745 09:20:04 version -- app/version.sh@19 -- # get_header_version patch 00:06:30.745 09:20:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # cut -f2 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.745 09:20:04 version -- app/version.sh@19 -- # patch=0 00:06:30.745 09:20:04 version -- app/version.sh@20 -- # get_header_version suffix 00:06:30.745 09:20:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # cut -f2 00:06:30.745 09:20:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.745 09:20:04 version -- app/version.sh@20 -- # suffix=-pre 00:06:30.745 09:20:04 version -- app/version.sh@22 -- # version=25.1 00:06:30.745 09:20:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.745 09:20:04 version -- app/version.sh@28 -- # version=25.1rc0 00:06:30.745 09:20:04 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:30.745 09:20:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.745 09:20:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:30.745 09:20:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:30.745 ************************************ 00:06:30.745 END TEST version 00:06:30.745 ************************************ 00:06:30.745 00:06:30.745 real 0m0.329s 00:06:30.745 user 0m0.193s 00:06:30.745 sys 0m0.195s 00:06:30.745 09:20:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.745 09:20:04 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.005 09:20:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.005 09:20:04 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:31.005 09:20:04 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.005 09:20:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.005 09:20:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.005 09:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.005 ************************************ 00:06:31.005 START TEST bdev_raid 00:06:31.005 ************************************ 00:06:31.005 09:20:04 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.005 * Looking for test storage... 00:06:31.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.005 09:20:04 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.005 09:20:04 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.005 09:20:04 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.005 09:20:05 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.005 
09:20:05 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.005 09:20:05 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.265 09:20:05 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.265 --rc genhtml_branch_coverage=1 00:06:31.265 --rc genhtml_function_coverage=1 00:06:31.265 --rc genhtml_legend=1 00:06:31.265 --rc geninfo_all_blocks=1 00:06:31.265 --rc geninfo_unexecuted_blocks=1 00:06:31.265 00:06:31.265 ' 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:06:31.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.265 --rc genhtml_branch_coverage=1 00:06:31.265 --rc genhtml_function_coverage=1 00:06:31.265 --rc genhtml_legend=1 00:06:31.265 --rc geninfo_all_blocks=1 00:06:31.265 --rc geninfo_unexecuted_blocks=1 00:06:31.265 00:06:31.265 ' 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.265 --rc genhtml_branch_coverage=1 00:06:31.265 --rc genhtml_function_coverage=1 00:06:31.265 --rc genhtml_legend=1 00:06:31.265 --rc geninfo_all_blocks=1 00:06:31.265 --rc geninfo_unexecuted_blocks=1 00:06:31.265 00:06:31.265 ' 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.265 --rc genhtml_branch_coverage=1 00:06:31.265 --rc genhtml_function_coverage=1 00:06:31.265 --rc genhtml_legend=1 00:06:31.265 --rc geninfo_all_blocks=1 00:06:31.265 --rc geninfo_unexecuted_blocks=1 00:06:31.265 00:06:31.265 ' 00:06:31.265 09:20:05 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.265 09:20:05 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.265 09:20:05 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:31.265 09:20:05 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:31.265 09:20:05 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:31.265 09:20:05 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:31.265 09:20:05 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.265 09:20:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:31.265 ************************************ 00:06:31.265 START TEST raid1_resize_data_offset_test 00:06:31.265 ************************************ 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=61184 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 61184' 00:06:31.265 Process raid pid: 61184 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 61184 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 61184 ']' 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.265 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.265 [2024-12-12 09:20:05.167643] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:31.265 [2024-12-12 09:20:05.167854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.525 [2024-12-12 09:20:05.346293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.525 [2024-12-12 09:20:05.485120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.785 [2024-12-12 09:20:05.725442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.785 [2024-12-12 09:20:05.725618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.044 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.044 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:32.044 09:20:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:32.044 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.044 09:20:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.303 malloc0 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.303 malloc1 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.303 09:20:06 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.303 null0 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.303 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.303 [2024-12-12 09:20:06.202664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:32.303 [2024-12-12 09:20:06.204771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:32.303 [2024-12-12 09:20:06.204831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:32.304 [2024-12-12 09:20:06.205028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.304 [2024-12-12 09:20:06.205046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:32.304 [2024-12-12 09:20:06.205326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:32.304 [2024-12-12 09:20:06.205515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.304 [2024-12-12 09:20:06.205530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:32.304 [2024-12-12 09:20:06.205694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.304 [2024-12-12 09:20:06.262542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.304 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.920 malloc2
00:06:32.920 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.920 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:32.920 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.920 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.920 [2024-12-12 09:20:06.914237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:33.200 [2024-12-12 09:20:06.933255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.200 [2024-12-12 09:20:06.935378] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 61184
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 61184 ']'
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 61184
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.200 09:20:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61184
00:06:33.200 09:20:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:33.200 09:20:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:33.200 09:20:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61184'
00:06:33.200 killing process with pid 61184
00:06:33.200 09:20:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 61184
00:06:33.200 09:20:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 61184
00:06:33.200 [2024-12-12 09:20:07.023527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:33.200 [2024-12-12 09:20:07.023901] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:33.200 [2024-12-12 09:20:07.024016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:33.200 [2024-12-12 09:20:07.024073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:33.200 [2024-12-12 09:20:07.059777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:33.200 [2024-12-12 09:20:07.060241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:33.200 [2024-12-12 09:20:07.060308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:35.106 [2024-12-12 09:20:08.951217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:36.486 09:20:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:36.486
00:06:36.486 real 0m5.070s
00:06:36.486 user 0m4.761s
00:06:36.487 sys 0m0.740s
00:06:36.487 09:20:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:36.487 09:20:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.487 ************************************
00:06:36.487 END TEST raid1_resize_data_offset_test
00:06:36.487 ************************************
00:06:36.487 09:20:10 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:36.487 09:20:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:36.487 09:20:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:36.487 09:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:36.487 ************************************
00:06:36.487 START TEST raid0_resize_superblock_test
00:06:36.487 ************************************
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61268
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61268'
00:06:36.487 Process raid pid: 61268
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61268
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61268 ']'
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:36.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.487 09:20:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.487 [2024-12-12 09:20:10.309375] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:06:36.487 [2024-12-12 09:20:10.309487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:36.487 [2024-12-12 09:20:10.485408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.746 [2024-12-12 09:20:10.621286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.005 [2024-12-12 09:20:10.861988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:37.005 [2024-12-12 09:20:10.862039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:37.263 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:37.263 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:37.263 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:37.263 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.263 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.832 malloc0
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.832 [2024-12-12 09:20:11.756336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:37.832 [2024-12-12 09:20:11.756403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:37.832 [2024-12-12 09:20:11.756428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:37.832 [2024-12-12 09:20:11.756443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:37.832 [2024-12-12 09:20:11.758912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:37.832 [2024-12-12 09:20:11.758953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:37.832 pt0
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.832 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 4afa6aa5-539f-452b-88f3-ac765614a595
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 6d338ff3-9181-4675-a049-fc09d04e502d
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 7de9fcd9-8e92-476e-bbc3-1aa4257a8557
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 [2024-12-12 09:20:11.966354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6d338ff3-9181-4675-a049-fc09d04e502d is claimed
00:06:38.105 [2024-12-12 09:20:11.966506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7de9fcd9-8e92-476e-bbc3-1aa4257a8557 is claimed
00:06:38.105 [2024-12-12 09:20:11.966662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:38.105 [2024-12-12 09:20:11.966682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:06:38.105 [2024-12-12 09:20:11.967010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:38.105 [2024-12-12 09:20:11.967213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:38.105 [2024-12-12 09:20:11.967224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:38.105 [2024-12-12 09:20:11.967367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 09:20:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 [2024-12-12 09:20:12.078345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.105 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.105 [2024-12-12 09:20:12.126244] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:38.106 [2024-12-12 09:20:12.126315] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6d338ff3-9181-4675-a049-fc09d04e502d' was resized: old size 131072, new size 204800
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.364 [2024-12-12 09:20:12.138174] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:38.364 [2024-12-12 09:20:12.138196] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7de9fcd9-8e92-476e-bbc3-1aa4257a8557' was resized: old size 131072, new size 204800
00:06:38.364 [2024-12-12 09:20:12.138223] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:38.364 [2024-12-12 09:20:12.250069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.364 [2024-12-12 09:20:12.297811] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:38.364 [2024-12-12 09:20:12.297878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:38.364 [2024-12-12 09:20:12.297890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:38.364 [2024-12-12 09:20:12.297905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:38.364 [2024-12-12 09:20:12.298030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:38.364 [2024-12-12 09:20:12.298066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:38.364 [2024-12-12 09:20:12.298077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.364 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.364 [2024-12-12 09:20:12.309718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:38.365 [2024-12-12 09:20:12.309762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:38.365 [2024-12-12 09:20:12.309779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:06:38.365 [2024-12-12 09:20:12.309790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:38.365 [2024-12-12 09:20:12.312155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:38.365 [2024-12-12 09:20:12.312193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:38.365 [2024-12-12 09:20:12.313795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6d338ff3-9181-4675-a049-fc09d04e502d
00:06:38.365 [2024-12-12 09:20:12.313883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6d338ff3-9181-4675-a049-fc09d04e502d is claimed
00:06:38.365 [2024-12-12 09:20:12.314033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7de9fcd9-8e92-476e-bbc3-1aa4257a8557
00:06:38.365 [2024-12-12 09:20:12.314054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7de9fcd9-8e92-476e-bbc3-1aa4257a8557 is claimed
00:06:38.365 [2024-12-12 09:20:12.314229] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7de9fcd9-8e92-476e-bbc3-1aa4257a8557 (2) smaller than existing raid bdev Raid (3)
00:06:38.365 [2024-12-12 09:20:12.314253] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 6d338ff3-9181-4675-a049-fc09d04e502d: File exists
00:06:38.365 [2024-12-12 09:20:12.314286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:06:38.365 [2024-12-12 09:20:12.314298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:06:38.365 [2024-12-12 09:20:12.314553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:06:38.365 pt0
00:06:38.365 [2024-12-12 09:20:12.314766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:06:38.365 [2024-12-12 09:20:12.314777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:06:38.365 [2024-12-12 09:20:12.314924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.365 [2024-12-12 09:20:12.338036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61268
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61268 ']'
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61268
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:38.365 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61268
00:06:38.624 killing process with pid 61268
00:06:38.624 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:38.624 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:38.624 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61268'
00:06:38.624 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61268
00:06:38.624 [2024-12-12 09:20:12.419580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:38.624 [2024-12-12 09:20:12.419682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:38.624 [2024-12-12 09:20:12.419732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:38.624 [2024-12-12 09:20:12.419743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:38.624 09:20:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61268
00:06:40.003 [2024-12-12 09:20:13.969204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:41.393 ************************************
00:06:41.393 END TEST raid0_resize_superblock_test
00:06:41.393 ************************************
00:06:41.393 09:20:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:41.393
00:06:41.393 real 0m4.976s
00:06:41.393 user 0m4.994s
00:06:41.393 sys 0m0.751s
00:06:41.393 09:20:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.393 09:20:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.393 09:20:15 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:41.393 09:20:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:41.393 09:20:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:41.393 09:20:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:41.393 ************************************
00:06:41.393 START TEST raid1_resize_superblock_test
00:06:41.393 ************************************
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61372
00:06:41.393 Process raid pid: 61372
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61372'
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61372
00:06:41.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61372 ']'
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:41.393 09:20:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.393 [2024-12-12 09:20:15.359977] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:06:41.393 [2024-12-12 09:20:15.360102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:41.652 [2024-12-12 09:20:15.541315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.910 [2024-12-12 09:20:15.679055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.910 [2024-12-12 09:20:15.918067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:41.910 [2024-12-12 09:20:15.918128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:42.476 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:42.476 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:42.476 09:20:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:42.476 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.476 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 malloc0
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 [2024-12-12 09:20:16.811782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:43.045 [2024-12-12 09:20:16.811846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:43.045 [2024-12-12 09:20:16.811874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:43.045 [2024-12-12 09:20:16.811889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:43.045 [2024-12-12 09:20:16.814384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:43.045 [2024-12-12 09:20:16.814483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:43.045 pt0
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 28548a9b-b0fe-4858-bb83-d0dc4ad2ab6b
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.045 09:20:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 811249be-0aad-4004-b784-ed732dd02fe5
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 9c44b653-8fc5-4e66-b383-b75adfb065f3
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 [2024-12-12 09:20:17.022807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 811249be-0aad-4004-b784-ed732dd02fe5 is claimed
00:06:43.045 [2024-12-12 09:20:17.022919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c44b653-8fc5-4e66-b383-b75adfb065f3 is claimed
00:06:43.045 [2024-12-12 09:20:17.023080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:43.045 [2024-12-12 09:20:17.023099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:43.045 [2024-12-12 09:20:17.023409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:43.045 [2024-12-12 09:20:17.023637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:43.045 [2024-12-12 09:20:17.023655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:43.045 [2024-12-12 09:20:17.023847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.045 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:43.304 09:20:17
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:43.304 [2024-12-12 09:20:17.138877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 [2024-12-12 09:20:17.190828] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.304 [2024-12-12 09:20:17.190862] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '811249be-0aad-4004-b784-ed732dd02fe5' was resized: old size 131072, new size 204800 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 [2024-12-12 09:20:17.202599] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.304 [2024-12-12 09:20:17.202623] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9c44b653-8fc5-4e66-b383-b75adfb065f3' was resized: old size 131072, new size 204800 00:06:43.304 [2024-12-12 09:20:17.202650] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.304 09:20:17 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 [2024-12-12 09:20:17.290512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.304 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.562 [2024-12-12 09:20:17.334252] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:43.562 [2024-12-12 09:20:17.334322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:43.562 [2024-12-12 09:20:17.334349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:43.562 [2024-12-12 09:20:17.334535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.562 [2024-12-12 09:20:17.334728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.562 [2024-12-12 09:20:17.334792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.562 [2024-12-12 09:20:17.334806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.562 [2024-12-12 09:20:17.346171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.562 [2024-12-12 09:20:17.346221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.562 [2024-12-12 09:20:17.346244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:43.562 [2024-12-12 09:20:17.346256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.562 
[2024-12-12 09:20:17.348779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.562 [2024-12-12 09:20:17.348818] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.562 [2024-12-12 09:20:17.350565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 811249be-0aad-4004-b784-ed732dd02fe5 00:06:43.562 [2024-12-12 09:20:17.350645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 811249be-0aad-4004-b784-ed732dd02fe5 is claimed 00:06:43.562 [2024-12-12 09:20:17.350762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9c44b653-8fc5-4e66-b383-b75adfb065f3 00:06:43.562 [2024-12-12 09:20:17.350780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c44b653-8fc5-4e66-b383-b75adfb065f3 is claimed 00:06:43.562 [2024-12-12 09:20:17.350933] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9c44b653-8fc5-4e66-b383-b75adfb065f3 (2) smaller than existing raid bdev Raid (3) 00:06:43.562 [2024-12-12 09:20:17.350972] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 811249be-0aad-4004-b784-ed732dd02fe5: File exists 00:06:43.562 [2024-12-12 09:20:17.351007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:43.562 [2024-12-12 09:20:17.351021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:43.562 pt0 00:06:43.562 [2024-12-12 09:20:17.351280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:43.562 [2024-12-12 09:20:17.351462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:43.562 [2024-12-12 09:20:17.351478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:43.562 [2024-12-12 09:20:17.351634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:43.562 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.562 [2024-12-12 09:20:17.374797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61372 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 61372 ']' 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61372 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61372 00:06:43.563 killing process with pid 61372 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61372' 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61372 00:06:43.563 [2024-12-12 09:20:17.454419] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.563 [2024-12-12 09:20:17.454502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.563 [2024-12-12 09:20:17.454554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.563 [2024-12-12 09:20:17.454564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.563 09:20:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61372 00:06:45.468 [2024-12-12 09:20:19.012445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.405 09:20:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:46.405 00:06:46.405 real 0m4.972s 00:06:46.405 user 0m4.956s 00:06:46.405 sys 0m0.771s 
00:06:46.405 ************************************ 00:06:46.405 END TEST raid1_resize_superblock_test 00:06:46.405 ************************************ 00:06:46.405 09:20:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.405 09:20:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.405 09:20:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:46.405 09:20:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:46.405 09:20:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:46.405 09:20:20 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:46.405 09:20:20 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:46.405 09:20:20 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:46.405 09:20:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.405 09:20:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.405 09:20:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.405 ************************************ 00:06:46.405 START TEST raid_function_test_raid0 00:06:46.405 ************************************ 00:06:46.405 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:46.405 09:20:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:46.405 09:20:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:46.405 09:20:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=61480 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.406 09:20:20 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 61480' 00:06:46.406 Process raid pid: 61480 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 61480 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 61480 ']' 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.406 09:20:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.406 [2024-12-12 09:20:20.425601] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:46.406 [2024-12-12 09:20:20.425832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.665 [2024-12-12 09:20:20.607625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.924 [2024-12-12 09:20:20.747005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.183 [2024-12-12 09:20:20.983780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.183 [2024-12-12 09:20:20.983945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.443 Base_1 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.443 Base_2 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.443 [2024-12-12 09:20:21.369749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:47.443 [2024-12-12 09:20:21.371871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:47.443 [2024-12-12 09:20:21.371943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:47.443 [2024-12-12 09:20:21.371955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:47.443 [2024-12-12 09:20:21.372259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:47.443 [2024-12-12 09:20:21.372421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:47.443 [2024-12-12 09:20:21.372431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:47.443 [2024-12-12 09:20:21.372582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:47.443 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:47.702 [2024-12-12 09:20:21.621418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:47.702 /dev/nbd0 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.702 
09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.702 1+0 records in 00:06:47.702 1+0 records out 00:06:47.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047335 s, 8.7 MB/s 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.702 09:20:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:47.703 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.703 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:47.703 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.703 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.703 09:20:21 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.962 { 00:06:47.962 "nbd_device": "/dev/nbd0", 00:06:47.962 "bdev_name": "raid" 00:06:47.962 } 00:06:47.962 ]' 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.962 { 00:06:47.962 "nbd_device": "/dev/nbd0", 00:06:47.962 "bdev_name": "raid" 00:06:47.962 } 00:06:47.962 ]' 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:06:47.962 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:48.221 09:20:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:48.222 4096+0 records in 00:06:48.222 4096+0 records out 00:06:48.222 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326896 s, 64.2 MB/s 00:06:48.222 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:48.480 4096+0 records in 00:06:48.480 4096+0 records out 00:06:48.480 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.214912 s, 9.8 MB/s 00:06:48.480 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:48.480 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:48.481 128+0 records in 00:06:48.481 128+0 records out 00:06:48.481 65536 bytes (66 kB, 64 KiB) copied, 0.00117118 s, 56.0 MB/s 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:48.481 2035+0 records in 00:06:48.481 2035+0 records out 00:06:48.481 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130046 s, 80.1 MB/s 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:48.481 09:20:22 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:48.481 456+0 records in 00:06:48.481 456+0 records out 00:06:48.481 233472 bytes (233 kB, 228 KiB) copied, 0.00411261 s, 56.8 MB/s 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.481 09:20:22 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.481 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.741 [2024-12-12 09:20:22.581686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.741 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 61480 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 61480 ']' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 61480 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61480 00:06:49.001 killing process with pid 61480 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61480' 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 
61480 00:06:49.001 [2024-12-12 09:20:22.880160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.001 09:20:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 61480 00:06:49.001 [2024-12-12 09:20:22.880303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.001 [2024-12-12 09:20:22.880363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.001 [2024-12-12 09:20:22.880379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:49.261 [2024-12-12 09:20:23.102558] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.657 09:20:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:50.657 00:06:50.657 real 0m3.989s 00:06:50.657 user 0m4.491s 00:06:50.657 sys 0m1.072s 00:06:50.657 09:20:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.657 ************************************ 00:06:50.657 END TEST raid_function_test_raid0 00:06:50.657 ************************************ 00:06:50.657 09:20:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:50.657 09:20:24 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:50.657 09:20:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.657 09:20:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.657 09:20:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.657 ************************************ 00:06:50.657 START TEST raid_function_test_concat 00:06:50.657 ************************************ 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:50.657 Process raid pid: 61609 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=61609 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 61609' 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 61609 00:06:50.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 61609 ']' 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.657 09:20:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.658 [2024-12-12 09:20:24.469288] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:50.658 [2024-12-12 09:20:24.469539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.658 [2024-12-12 09:20:24.652584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.917 [2024-12-12 09:20:24.793558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.177 [2024-12-12 09:20:25.036032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.177 [2024-12-12 09:20:25.036195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.436 Base_1 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.436 Base_2 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.436 [2024-12-12 09:20:25.395628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:51.436 [2024-12-12 09:20:25.397795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:51.436 [2024-12-12 09:20:25.397867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:51.436 [2024-12-12 09:20:25.397879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.436 [2024-12-12 09:20:25.398153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.436 [2024-12-12 09:20:25.398310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:51.436 [2024-12-12 09:20:25.398341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:51.436 [2024-12-12 09:20:25.398496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.436 09:20:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:51.436 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:51.695 [2024-12-12 09:20:25.643297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:51.695 /dev/nbd0 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.695 1+0 records in 00:06:51.695 1+0 records out 00:06:51.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464166 s, 8.8 MB/s 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.695 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.954 { 00:06:51.954 "nbd_device": "/dev/nbd0", 00:06:51.954 "bdev_name": "raid" 00:06:51.954 } 00:06:51.954 ]' 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.954 { 00:06:51.954 "nbd_device": "/dev/nbd0", 00:06:51.954 "bdev_name": "raid" 00:06:51.954 } 00:06:51.954 ]' 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:51.954 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:52.213 09:20:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:52.213 09:20:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:52.213 4096+0 records in 00:06:52.213 4096+0 records out 00:06:52.213 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0325238 s, 64.5 MB/s 00:06:52.213 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:52.473 4096+0 records in 00:06:52.473 4096+0 records out 00:06:52.473 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.222552 s, 9.4 MB/s 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:52.473 128+0 records in 00:06:52.473 128+0 records out 00:06:52.473 65536 bytes (66 kB, 64 KiB) copied, 0.000921197 s, 71.1 MB/s 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:52.473 2035+0 records in 00:06:52.473 2035+0 records out 00:06:52.473 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00863337 s, 121 MB/s 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:52.473 456+0 records in 00:06:52.473 456+0 records out 00:06:52.473 233472 bytes (233 kB, 228 KiB) copied, 0.00351159 s, 66.5 MB/s 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:52.473 09:20:26 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.473 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.732 [2024-12-12 09:20:26.581499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:52.732 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 61609 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 61609 ']' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 61609 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61609 00:06:52.991 killing process with pid 61609 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 61609' 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 61609 00:06:52.991 [2024-12-12 09:20:26.891158] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.991 09:20:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 61609 00:06:52.991 [2024-12-12 09:20:26.891300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.991 [2024-12-12 09:20:26.891364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.991 [2024-12-12 09:20:26.891376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:53.250 [2024-12-12 09:20:27.112308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.629 ************************************ 00:06:54.629 END TEST raid_function_test_concat 00:06:54.629 ************************************ 00:06:54.629 09:20:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:54.629 00:06:54.629 real 0m3.940s 00:06:54.629 user 0m4.403s 00:06:54.629 sys 0m1.057s 00:06:54.629 09:20:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.629 09:20:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:54.629 09:20:28 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:54.629 09:20:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.629 09:20:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.629 09:20:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.629 ************************************ 00:06:54.629 START TEST raid0_resize_test 00:06:54.629 ************************************ 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61726 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61726' 00:06:54.629 Process raid pid: 61726 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61726 00:06:54.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61726 ']' 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.629 09:20:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.629 [2024-12-12 09:20:28.480992] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:54.629 [2024-12-12 09:20:28.481109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.629 [2024-12-12 09:20:28.639428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.888 [2024-12-12 09:20:28.777171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.148 [2024-12-12 09:20:29.015212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.148 [2024-12-12 09:20:29.015279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.407 Base_1 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:55.407 Base_2 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.407 [2024-12-12 09:20:29.326555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:55.407 [2024-12-12 09:20:29.328718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:55.407 [2024-12-12 09:20:29.328774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.407 [2024-12-12 09:20:29.328786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:55.407 [2024-12-12 09:20:29.329048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.407 [2024-12-12 09:20:29.329177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.407 [2024-12-12 09:20:29.329186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.407 [2024-12-12 09:20:29.329332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.407 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:06:55.407 [2024-12-12 09:20:29.334512] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.408 [2024-12-12 09:20:29.334538] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:55.408 true 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.408 [2024-12-12 09:20:29.350653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.408 [2024-12-12 09:20:29.398399] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.408 [2024-12-12 09:20:29.398467] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:55.408 [2024-12-12 09:20:29.398530] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:55.408 true 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.408 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.408 [2024-12-12 09:20:29.414517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61726 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61726 ']' 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 61726 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61726 00:06:55.667 killing process with pid 61726 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61726' 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 61726 00:06:55.667 [2024-12-12 09:20:29.488100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.667 [2024-12-12 09:20:29.488178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.667 [2024-12-12 09:20:29.488222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.667 [2024-12-12 09:20:29.488231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.667 09:20:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 61726 00:06:55.667 [2024-12-12 09:20:29.506815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.045 ************************************ 00:06:57.045 END TEST raid0_resize_test 00:06:57.045 ************************************ 00:06:57.045 09:20:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:57.045 00:06:57.045 real 0m2.333s 00:06:57.045 user 0m2.396s 00:06:57.045 sys 0m0.403s 00:06:57.046 09:20:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.046 09:20:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.046 09:20:30 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:57.046 
09:20:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.046 09:20:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.046 09:20:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.046 ************************************ 00:06:57.046 START TEST raid1_resize_test 00:06:57.046 ************************************ 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61793 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61793' 00:06:57.046 Process raid pid: 61793 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61793 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61793 ']' 00:06:57.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.046 09:20:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.046 [2024-12-12 09:20:30.890491] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:57.046 [2024-12-12 09:20:30.890626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.046 [2024-12-12 09:20:31.068246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.304 [2024-12-12 09:20:31.211505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.575 [2024-12-12 09:20:31.450674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.575 [2024-12-12 09:20:31.450732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.836 Base_1 
00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.836 Base_2 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.836 [2024-12-12 09:20:31.743381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:57.836 [2024-12-12 09:20:31.745445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:57.836 [2024-12-12 09:20:31.745573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:57.836 [2024-12-12 09:20:31.745591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:57.836 [2024-12-12 09:20:31.745846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:57.836 [2024-12-12 09:20:31.746002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:57.836 [2024-12-12 09:20:31.746013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:57.836 [2024-12-12 09:20:31.746161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.836 [2024-12-12 09:20:31.751346] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.836 [2024-12-12 09:20:31.751376] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:57.836 true 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.836 [2024-12-12 09:20:31.763481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.836 [2024-12-12 09:20:31.815222] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.836 [2024-12-12 09:20:31.815245] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:57.836 [2024-12-12 09:20:31.815273] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:57.836 true 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:57.836 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.837 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.837 [2024-12-12 09:20:31.831347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.837 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.096 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:58.096 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:58.096 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:58.096 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:58.096 09:20:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:58.096 09:20:31 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 61793 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61793 ']' 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 61793 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61793 00:06:58.097 killing process with pid 61793 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61793' 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 61793 00:06:58.097 [2024-12-12 09:20:31.905799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.097 [2024-12-12 09:20:31.905884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.097 09:20:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 61793 00:06:58.097 [2024-12-12 09:20:31.906404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.097 [2024-12-12 09:20:31.906427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:58.097 [2024-12-12 09:20:31.923474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.477 09:20:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:59.477 00:06:59.477 real 0m2.348s 00:06:59.477 user 0m2.398s 00:06:59.477 sys 0m0.433s 00:06:59.477 09:20:33 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.477 ************************************ 00:06:59.477 END TEST raid1_resize_test 00:06:59.477 ************************************ 00:06:59.477 09:20:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.477 09:20:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:59.477 09:20:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:59.477 09:20:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:59.477 09:20:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:59.477 09:20:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.477 09:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.477 ************************************ 00:06:59.477 START TEST raid_state_function_test 00:06:59.477 ************************************ 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:59.477 Process raid pid: 61850 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61850 00:06:59.477 09:20:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61850' 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61850 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61850 ']' 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.477 09:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.477 [2024-12-12 09:20:33.290392] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:06:59.477 [2024-12-12 09:20:33.290623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.477 [2024-12-12 09:20:33.451049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.737 [2024-12-12 09:20:33.591020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.996 [2024-12-12 09:20:33.812835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.996 [2024-12-12 09:20:33.813006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.255 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.256 [2024-12-12 09:20:34.133045] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.256 [2024-12-12 09:20:34.133158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:00.256 [2024-12-12 09:20:34.133188] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.256 [2024-12-12 09:20:34.133212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.256 09:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.256 "name": "Existed_Raid", 00:07:00.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.256 "strip_size_kb": 64, 00:07:00.256 "state": "configuring", 00:07:00.256 
"raid_level": "raid0", 00:07:00.256 "superblock": false, 00:07:00.256 "num_base_bdevs": 2, 00:07:00.256 "num_base_bdevs_discovered": 0, 00:07:00.256 "num_base_bdevs_operational": 2, 00:07:00.256 "base_bdevs_list": [ 00:07:00.256 { 00:07:00.256 "name": "BaseBdev1", 00:07:00.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.256 "is_configured": false, 00:07:00.256 "data_offset": 0, 00:07:00.256 "data_size": 0 00:07:00.256 }, 00:07:00.256 { 00:07:00.256 "name": "BaseBdev2", 00:07:00.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.256 "is_configured": false, 00:07:00.256 "data_offset": 0, 00:07:00.256 "data_size": 0 00:07:00.256 } 00:07:00.256 ] 00:07:00.256 }' 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.256 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.825 [2024-12-12 09:20:34.564247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.825 [2024-12-12 09:20:34.564288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.825 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:00.825 [2024-12-12 09:20:34.576187] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.825 [2024-12-12 09:20:34.576233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:00.825 [2024-12-12 09:20:34.576242] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.825 [2024-12-12 09:20:34.576254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.826 [2024-12-12 09:20:34.628524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.826 BaseBdev1 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.826 [ 00:07:00.826 { 00:07:00.826 "name": "BaseBdev1", 00:07:00.826 "aliases": [ 00:07:00.826 "158dac73-89a6-4ff5-9fb0-277e1e5b1e50" 00:07:00.826 ], 00:07:00.826 "product_name": "Malloc disk", 00:07:00.826 "block_size": 512, 00:07:00.826 "num_blocks": 65536, 00:07:00.826 "uuid": "158dac73-89a6-4ff5-9fb0-277e1e5b1e50", 00:07:00.826 "assigned_rate_limits": { 00:07:00.826 "rw_ios_per_sec": 0, 00:07:00.826 "rw_mbytes_per_sec": 0, 00:07:00.826 "r_mbytes_per_sec": 0, 00:07:00.826 "w_mbytes_per_sec": 0 00:07:00.826 }, 00:07:00.826 "claimed": true, 00:07:00.826 "claim_type": "exclusive_write", 00:07:00.826 "zoned": false, 00:07:00.826 "supported_io_types": { 00:07:00.826 "read": true, 00:07:00.826 "write": true, 00:07:00.826 "unmap": true, 00:07:00.826 "flush": true, 00:07:00.826 "reset": true, 00:07:00.826 "nvme_admin": false, 00:07:00.826 "nvme_io": false, 00:07:00.826 "nvme_io_md": false, 00:07:00.826 "write_zeroes": true, 00:07:00.826 "zcopy": true, 00:07:00.826 "get_zone_info": false, 00:07:00.826 "zone_management": false, 00:07:00.826 "zone_append": false, 00:07:00.826 "compare": false, 00:07:00.826 "compare_and_write": false, 00:07:00.826 "abort": true, 00:07:00.826 "seek_hole": false, 00:07:00.826 "seek_data": false, 00:07:00.826 "copy": true, 00:07:00.826 "nvme_iov_md": 
false 00:07:00.826 }, 00:07:00.826 "memory_domains": [ 00:07:00.826 { 00:07:00.826 "dma_device_id": "system", 00:07:00.826 "dma_device_type": 1 00:07:00.826 }, 00:07:00.826 { 00:07:00.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.826 "dma_device_type": 2 00:07:00.826 } 00:07:00.826 ], 00:07:00.826 "driver_specific": {} 00:07:00.826 } 00:07:00.826 ] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.826 
09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.826 "name": "Existed_Raid", 00:07:00.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.826 "strip_size_kb": 64, 00:07:00.826 "state": "configuring", 00:07:00.826 "raid_level": "raid0", 00:07:00.826 "superblock": false, 00:07:00.826 "num_base_bdevs": 2, 00:07:00.826 "num_base_bdevs_discovered": 1, 00:07:00.826 "num_base_bdevs_operational": 2, 00:07:00.826 "base_bdevs_list": [ 00:07:00.826 { 00:07:00.826 "name": "BaseBdev1", 00:07:00.826 "uuid": "158dac73-89a6-4ff5-9fb0-277e1e5b1e50", 00:07:00.826 "is_configured": true, 00:07:00.826 "data_offset": 0, 00:07:00.826 "data_size": 65536 00:07:00.826 }, 00:07:00.826 { 00:07:00.826 "name": "BaseBdev2", 00:07:00.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.826 "is_configured": false, 00:07:00.826 "data_offset": 0, 00:07:00.826 "data_size": 0 00:07:00.826 } 00:07:00.826 ] 00:07:00.826 }' 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.826 09:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.086 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:01.086 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.086 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.345 [2024-12-12 09:20:35.111815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:01.345 [2024-12-12 09:20:35.111937] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.345 [2024-12-12 09:20:35.123847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:01.345 [2024-12-12 09:20:35.126036] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.345 [2024-12-12 09:20:35.126123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.345 "name": "Existed_Raid", 00:07:01.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.345 "strip_size_kb": 64, 00:07:01.345 "state": "configuring", 00:07:01.345 "raid_level": "raid0", 00:07:01.345 "superblock": false, 00:07:01.345 "num_base_bdevs": 2, 00:07:01.345 "num_base_bdevs_discovered": 1, 00:07:01.345 "num_base_bdevs_operational": 2, 00:07:01.345 "base_bdevs_list": [ 00:07:01.345 { 00:07:01.345 "name": "BaseBdev1", 00:07:01.345 "uuid": "158dac73-89a6-4ff5-9fb0-277e1e5b1e50", 00:07:01.345 "is_configured": true, 00:07:01.345 "data_offset": 0, 00:07:01.345 "data_size": 65536 00:07:01.345 }, 00:07:01.345 { 00:07:01.345 "name": "BaseBdev2", 00:07:01.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.345 "is_configured": false, 00:07:01.345 "data_offset": 0, 00:07:01.345 "data_size": 0 00:07:01.345 } 00:07:01.345 
] 00:07:01.345 }' 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.345 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.605 [2024-12-12 09:20:35.550371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:01.605 [2024-12-12 09:20:35.550433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:01.605 [2024-12-12 09:20:35.550444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:01.605 [2024-12-12 09:20:35.550739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:01.605 [2024-12-12 09:20:35.550970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:01.605 [2024-12-12 09:20:35.550985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:01.605 [2024-12-12 09:20:35.551449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.605 BaseBdev2 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:01.605 09:20:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:01.605 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.606 [ 00:07:01.606 { 00:07:01.606 "name": "BaseBdev2", 00:07:01.606 "aliases": [ 00:07:01.606 "652b1ae7-8f3d-425f-868a-18e6246ee028" 00:07:01.606 ], 00:07:01.606 "product_name": "Malloc disk", 00:07:01.606 "block_size": 512, 00:07:01.606 "num_blocks": 65536, 00:07:01.606 "uuid": "652b1ae7-8f3d-425f-868a-18e6246ee028", 00:07:01.606 "assigned_rate_limits": { 00:07:01.606 "rw_ios_per_sec": 0, 00:07:01.606 "rw_mbytes_per_sec": 0, 00:07:01.606 "r_mbytes_per_sec": 0, 00:07:01.606 "w_mbytes_per_sec": 0 00:07:01.606 }, 00:07:01.606 "claimed": true, 00:07:01.606 "claim_type": "exclusive_write", 00:07:01.606 "zoned": false, 00:07:01.606 "supported_io_types": { 00:07:01.606 "read": true, 00:07:01.606 "write": true, 00:07:01.606 "unmap": true, 00:07:01.606 "flush": true, 00:07:01.606 "reset": true, 00:07:01.606 "nvme_admin": false, 00:07:01.606 "nvme_io": false, 00:07:01.606 "nvme_io_md": 
false, 00:07:01.606 "write_zeroes": true, 00:07:01.606 "zcopy": true, 00:07:01.606 "get_zone_info": false, 00:07:01.606 "zone_management": false, 00:07:01.606 "zone_append": false, 00:07:01.606 "compare": false, 00:07:01.606 "compare_and_write": false, 00:07:01.606 "abort": true, 00:07:01.606 "seek_hole": false, 00:07:01.606 "seek_data": false, 00:07:01.606 "copy": true, 00:07:01.606 "nvme_iov_md": false 00:07:01.606 }, 00:07:01.606 "memory_domains": [ 00:07:01.606 { 00:07:01.606 "dma_device_id": "system", 00:07:01.606 "dma_device_type": 1 00:07:01.606 }, 00:07:01.606 { 00:07:01.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.606 "dma_device_type": 2 00:07:01.606 } 00:07:01.606 ], 00:07:01.606 "driver_specific": {} 00:07:01.606 } 00:07:01.606 ] 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.606 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.865 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.865 "name": "Existed_Raid", 00:07:01.865 "uuid": "c1b08de1-1684-4719-bd2b-dcfbacd9c356", 00:07:01.865 "strip_size_kb": 64, 00:07:01.865 "state": "online", 00:07:01.865 "raid_level": "raid0", 00:07:01.865 "superblock": false, 00:07:01.865 "num_base_bdevs": 2, 00:07:01.865 "num_base_bdevs_discovered": 2, 00:07:01.865 "num_base_bdevs_operational": 2, 00:07:01.865 "base_bdevs_list": [ 00:07:01.865 { 00:07:01.865 "name": "BaseBdev1", 00:07:01.865 "uuid": "158dac73-89a6-4ff5-9fb0-277e1e5b1e50", 00:07:01.865 "is_configured": true, 00:07:01.865 "data_offset": 0, 00:07:01.866 "data_size": 65536 00:07:01.866 }, 00:07:01.866 { 00:07:01.866 "name": "BaseBdev2", 00:07:01.866 "uuid": "652b1ae7-8f3d-425f-868a-18e6246ee028", 00:07:01.866 "is_configured": true, 00:07:01.866 "data_offset": 0, 00:07:01.866 "data_size": 65536 00:07:01.866 } 00:07:01.866 ] 00:07:01.866 }' 00:07:01.866 09:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:01.866 09:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.125 [2024-12-12 09:20:36.017927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.125 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:02.125 "name": "Existed_Raid", 00:07:02.125 "aliases": [ 00:07:02.125 "c1b08de1-1684-4719-bd2b-dcfbacd9c356" 00:07:02.125 ], 00:07:02.125 "product_name": "Raid Volume", 00:07:02.125 "block_size": 512, 00:07:02.125 "num_blocks": 131072, 00:07:02.125 "uuid": "c1b08de1-1684-4719-bd2b-dcfbacd9c356", 00:07:02.125 "assigned_rate_limits": { 00:07:02.125 "rw_ios_per_sec": 0, 00:07:02.125 "rw_mbytes_per_sec": 0, 00:07:02.125 "r_mbytes_per_sec": 
0, 00:07:02.125 "w_mbytes_per_sec": 0 00:07:02.125 }, 00:07:02.125 "claimed": false, 00:07:02.125 "zoned": false, 00:07:02.126 "supported_io_types": { 00:07:02.126 "read": true, 00:07:02.126 "write": true, 00:07:02.126 "unmap": true, 00:07:02.126 "flush": true, 00:07:02.126 "reset": true, 00:07:02.126 "nvme_admin": false, 00:07:02.126 "nvme_io": false, 00:07:02.126 "nvme_io_md": false, 00:07:02.126 "write_zeroes": true, 00:07:02.126 "zcopy": false, 00:07:02.126 "get_zone_info": false, 00:07:02.126 "zone_management": false, 00:07:02.126 "zone_append": false, 00:07:02.126 "compare": false, 00:07:02.126 "compare_and_write": false, 00:07:02.126 "abort": false, 00:07:02.126 "seek_hole": false, 00:07:02.126 "seek_data": false, 00:07:02.126 "copy": false, 00:07:02.126 "nvme_iov_md": false 00:07:02.126 }, 00:07:02.126 "memory_domains": [ 00:07:02.126 { 00:07:02.126 "dma_device_id": "system", 00:07:02.126 "dma_device_type": 1 00:07:02.126 }, 00:07:02.126 { 00:07:02.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.126 "dma_device_type": 2 00:07:02.126 }, 00:07:02.126 { 00:07:02.126 "dma_device_id": "system", 00:07:02.126 "dma_device_type": 1 00:07:02.126 }, 00:07:02.126 { 00:07:02.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.126 "dma_device_type": 2 00:07:02.126 } 00:07:02.126 ], 00:07:02.126 "driver_specific": { 00:07:02.126 "raid": { 00:07:02.126 "uuid": "c1b08de1-1684-4719-bd2b-dcfbacd9c356", 00:07:02.126 "strip_size_kb": 64, 00:07:02.126 "state": "online", 00:07:02.126 "raid_level": "raid0", 00:07:02.126 "superblock": false, 00:07:02.126 "num_base_bdevs": 2, 00:07:02.126 "num_base_bdevs_discovered": 2, 00:07:02.126 "num_base_bdevs_operational": 2, 00:07:02.126 "base_bdevs_list": [ 00:07:02.126 { 00:07:02.126 "name": "BaseBdev1", 00:07:02.126 "uuid": "158dac73-89a6-4ff5-9fb0-277e1e5b1e50", 00:07:02.126 "is_configured": true, 00:07:02.126 "data_offset": 0, 00:07:02.126 "data_size": 65536 00:07:02.126 }, 00:07:02.126 { 00:07:02.126 "name": "BaseBdev2", 
00:07:02.126 "uuid": "652b1ae7-8f3d-425f-868a-18e6246ee028", 00:07:02.126 "is_configured": true, 00:07:02.126 "data_offset": 0, 00:07:02.126 "data_size": 65536 00:07:02.126 } 00:07:02.126 ] 00:07:02.126 } 00:07:02.126 } 00:07:02.126 }' 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:02.126 BaseBdev2' 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.126 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 [2024-12-12 09:20:36.245318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:02.386 [2024-12-12 09:20:36.245433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:02.386 [2024-12-12 09:20:36.245519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.386 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.386 "name": "Existed_Raid", 00:07:02.386 "uuid": "c1b08de1-1684-4719-bd2b-dcfbacd9c356", 00:07:02.386 "strip_size_kb": 64, 00:07:02.386 
"state": "offline", 00:07:02.386 "raid_level": "raid0", 00:07:02.386 "superblock": false, 00:07:02.386 "num_base_bdevs": 2, 00:07:02.386 "num_base_bdevs_discovered": 1, 00:07:02.386 "num_base_bdevs_operational": 1, 00:07:02.386 "base_bdevs_list": [ 00:07:02.386 { 00:07:02.386 "name": null, 00:07:02.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.386 "is_configured": false, 00:07:02.386 "data_offset": 0, 00:07:02.386 "data_size": 65536 00:07:02.386 }, 00:07:02.386 { 00:07:02.386 "name": "BaseBdev2", 00:07:02.386 "uuid": "652b1ae7-8f3d-425f-868a-18e6246ee028", 00:07:02.386 "is_configured": true, 00:07:02.386 "data_offset": 0, 00:07:02.386 "data_size": 65536 00:07:02.386 } 00:07:02.386 ] 00:07:02.386 }' 00:07:02.387 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.387 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.956 [2024-12-12 09:20:36.833767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:02.956 [2024-12-12 09:20:36.833904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:02.956 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:02.957 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61850 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61850 ']' 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61850 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.216 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61850 00:07:03.216 killing process with pid 61850 00:07:03.216 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.216 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.216 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61850' 00:07:03.216 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61850 00:07:03.216 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61850 00:07:03.217 [2024-12-12 09:20:37.030719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.217 [2024-12-12 09:20:37.048841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:04.597 ************************************ 00:07:04.597 END TEST raid_state_function_test 00:07:04.597 ************************************ 00:07:04.597 00:07:04.597 real 0m5.071s 00:07:04.597 user 0m7.128s 00:07:04.597 sys 0m0.874s 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.597 09:20:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:04.597 09:20:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:04.597 09:20:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.597 09:20:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.597 ************************************ 00:07:04.597 START TEST raid_state_function_test_sb 00:07:04.597 ************************************ 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62103 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62103' 00:07:04.597 Process raid pid: 62103 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62103 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62103 ']' 00:07:04.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.597 09:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.597 [2024-12-12 09:20:38.448363] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:04.597 [2024-12-12 09:20:38.448558] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.597 [2024-12-12 09:20:38.603689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.856 [2024-12-12 09:20:38.745338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.116 [2024-12-12 09:20:38.993891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.116 [2024-12-12 09:20:38.993949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.374 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.374 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:05.374 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.374 09:20:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.374 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.374 [2024-12-12 09:20:39.285512] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.375 [2024-12-12 09:20:39.285640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.375 [2024-12-12 09:20:39.285656] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.375 [2024-12-12 09:20:39.285666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.375 09:20:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.375 "name": "Existed_Raid", 00:07:05.375 "uuid": "b0da90a5-33f6-4f26-8811-468bf77d1c28", 00:07:05.375 "strip_size_kb": 64, 00:07:05.375 "state": "configuring", 00:07:05.375 "raid_level": "raid0", 00:07:05.375 "superblock": true, 00:07:05.375 "num_base_bdevs": 2, 00:07:05.375 "num_base_bdevs_discovered": 0, 00:07:05.375 "num_base_bdevs_operational": 2, 00:07:05.375 "base_bdevs_list": [ 00:07:05.375 { 00:07:05.375 "name": "BaseBdev1", 00:07:05.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.375 "is_configured": false, 00:07:05.375 "data_offset": 0, 00:07:05.375 "data_size": 0 00:07:05.375 }, 00:07:05.375 { 00:07:05.375 "name": "BaseBdev2", 00:07:05.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.375 "is_configured": false, 00:07:05.375 "data_offset": 0, 00:07:05.375 "data_size": 0 00:07:05.375 } 00:07:05.375 ] 00:07:05.375 }' 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.375 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 [2024-12-12 09:20:39.704749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:05.950 [2024-12-12 09:20:39.704848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 [2024-12-12 09:20:39.716706] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.950 [2024-12-12 09:20:39.716790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.950 [2024-12-12 09:20:39.716819] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.950 [2024-12-12 09:20:39.716846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.950 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.951 [2024-12-12 09:20:39.772461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:07:05.951 BaseBdev1 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.951 [ 00:07:05.951 { 00:07:05.951 "name": "BaseBdev1", 00:07:05.951 "aliases": [ 00:07:05.951 "3ff2c977-5e3b-478e-9125-979fb0d7d5a3" 00:07:05.951 ], 00:07:05.951 "product_name": "Malloc disk", 00:07:05.951 "block_size": 512, 00:07:05.951 "num_blocks": 65536, 00:07:05.951 "uuid": "3ff2c977-5e3b-478e-9125-979fb0d7d5a3", 00:07:05.951 
"assigned_rate_limits": { 00:07:05.951 "rw_ios_per_sec": 0, 00:07:05.951 "rw_mbytes_per_sec": 0, 00:07:05.951 "r_mbytes_per_sec": 0, 00:07:05.951 "w_mbytes_per_sec": 0 00:07:05.951 }, 00:07:05.951 "claimed": true, 00:07:05.951 "claim_type": "exclusive_write", 00:07:05.951 "zoned": false, 00:07:05.951 "supported_io_types": { 00:07:05.951 "read": true, 00:07:05.951 "write": true, 00:07:05.951 "unmap": true, 00:07:05.951 "flush": true, 00:07:05.951 "reset": true, 00:07:05.951 "nvme_admin": false, 00:07:05.951 "nvme_io": false, 00:07:05.951 "nvme_io_md": false, 00:07:05.951 "write_zeroes": true, 00:07:05.951 "zcopy": true, 00:07:05.951 "get_zone_info": false, 00:07:05.951 "zone_management": false, 00:07:05.951 "zone_append": false, 00:07:05.951 "compare": false, 00:07:05.951 "compare_and_write": false, 00:07:05.951 "abort": true, 00:07:05.951 "seek_hole": false, 00:07:05.951 "seek_data": false, 00:07:05.951 "copy": true, 00:07:05.951 "nvme_iov_md": false 00:07:05.951 }, 00:07:05.951 "memory_domains": [ 00:07:05.951 { 00:07:05.951 "dma_device_id": "system", 00:07:05.951 "dma_device_type": 1 00:07:05.951 }, 00:07:05.951 { 00:07:05.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.951 "dma_device_type": 2 00:07:05.951 } 00:07:05.951 ], 00:07:05.951 "driver_specific": {} 00:07:05.951 } 00:07:05.951 ] 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.951 "name": "Existed_Raid", 00:07:05.951 "uuid": "a79622f5-8fca-4111-8c65-8ad64004df04", 00:07:05.951 "strip_size_kb": 64, 00:07:05.951 "state": "configuring", 00:07:05.951 "raid_level": "raid0", 00:07:05.951 "superblock": true, 00:07:05.951 "num_base_bdevs": 2, 00:07:05.951 "num_base_bdevs_discovered": 1, 00:07:05.951 "num_base_bdevs_operational": 2, 00:07:05.951 "base_bdevs_list": [ 00:07:05.951 { 00:07:05.951 "name": "BaseBdev1", 00:07:05.951 "uuid": "3ff2c977-5e3b-478e-9125-979fb0d7d5a3", 00:07:05.951 "is_configured": true, 00:07:05.951 "data_offset": 2048, 
00:07:05.951 "data_size": 63488 00:07:05.951 }, 00:07:05.951 { 00:07:05.951 "name": "BaseBdev2", 00:07:05.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.951 "is_configured": false, 00:07:05.951 "data_offset": 0, 00:07:05.951 "data_size": 0 00:07:05.951 } 00:07:05.951 ] 00:07:05.951 }' 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.951 09:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.520 [2024-12-12 09:20:40.279773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.520 [2024-12-12 09:20:40.279891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.520 [2024-12-12 09:20:40.291792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.520 [2024-12-12 09:20:40.293980] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.520 [2024-12-12 09:20:40.294023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.520 "name": "Existed_Raid", 00:07:06.520 "uuid": "241b18d8-d21b-4678-b366-54defe1c50f8", 00:07:06.520 "strip_size_kb": 64, 00:07:06.520 "state": "configuring", 00:07:06.520 "raid_level": "raid0", 00:07:06.520 "superblock": true, 00:07:06.520 "num_base_bdevs": 2, 00:07:06.520 "num_base_bdevs_discovered": 1, 00:07:06.520 "num_base_bdevs_operational": 2, 00:07:06.520 "base_bdevs_list": [ 00:07:06.520 { 00:07:06.520 "name": "BaseBdev1", 00:07:06.520 "uuid": "3ff2c977-5e3b-478e-9125-979fb0d7d5a3", 00:07:06.520 "is_configured": true, 00:07:06.520 "data_offset": 2048, 00:07:06.520 "data_size": 63488 00:07:06.520 }, 00:07:06.520 { 00:07:06.520 "name": "BaseBdev2", 00:07:06.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.520 "is_configured": false, 00:07:06.520 "data_offset": 0, 00:07:06.520 "data_size": 0 00:07:06.520 } 00:07:06.520 ] 00:07:06.520 }' 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.520 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 [2024-12-12 09:20:40.755892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.780 [2024-12-12 09:20:40.756284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:06.780 [2024-12-12 09:20:40.756339] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.780 [2024-12-12 09:20:40.756638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:06.780 BaseBdev2 00:07:06.780 [2024-12-12 09:20:40.756850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:06.780 [2024-12-12 09:20:40.756868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:06.780 [2024-12-12 09:20:40.757029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 [ 00:07:06.780 { 00:07:06.780 "name": "BaseBdev2", 00:07:06.780 "aliases": [ 00:07:06.780 "9729192b-4463-4a1b-8e11-93ac68b031af" 00:07:06.780 ], 00:07:06.780 "product_name": "Malloc disk", 00:07:06.780 "block_size": 512, 00:07:06.780 "num_blocks": 65536, 00:07:06.780 "uuid": "9729192b-4463-4a1b-8e11-93ac68b031af", 00:07:06.780 "assigned_rate_limits": { 00:07:06.780 "rw_ios_per_sec": 0, 00:07:06.780 "rw_mbytes_per_sec": 0, 00:07:06.780 "r_mbytes_per_sec": 0, 00:07:06.780 "w_mbytes_per_sec": 0 00:07:06.780 }, 00:07:06.780 "claimed": true, 00:07:06.780 "claim_type": "exclusive_write", 00:07:06.780 "zoned": false, 00:07:06.780 "supported_io_types": { 00:07:06.780 "read": true, 00:07:06.780 "write": true, 00:07:06.780 "unmap": true, 00:07:06.780 "flush": true, 00:07:06.780 "reset": true, 00:07:06.780 "nvme_admin": false, 00:07:06.780 "nvme_io": false, 00:07:06.780 "nvme_io_md": false, 00:07:06.780 "write_zeroes": true, 00:07:06.780 "zcopy": true, 00:07:06.780 "get_zone_info": false, 00:07:06.780 "zone_management": false, 00:07:06.780 "zone_append": false, 00:07:06.780 "compare": false, 00:07:06.780 "compare_and_write": false, 00:07:06.780 "abort": true, 00:07:06.780 "seek_hole": false, 00:07:06.780 "seek_data": false, 00:07:06.780 "copy": true, 00:07:06.780 "nvme_iov_md": false 00:07:06.780 }, 00:07:06.780 "memory_domains": [ 00:07:06.780 { 00:07:06.780 "dma_device_id": "system", 00:07:06.780 "dma_device_type": 1 00:07:06.780 }, 00:07:06.780 { 00:07:06.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.780 "dma_device_type": 2 00:07:06.780 } 00:07:06.780 ], 00:07:06.780 "driver_specific": {} 00:07:06.780 } 00:07:06.780 ] 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.780 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.040 "name": "Existed_Raid", 00:07:07.040 "uuid": "241b18d8-d21b-4678-b366-54defe1c50f8", 00:07:07.040 "strip_size_kb": 64, 00:07:07.040 "state": "online", 00:07:07.040 "raid_level": "raid0", 00:07:07.040 "superblock": true, 00:07:07.040 "num_base_bdevs": 2, 00:07:07.040 "num_base_bdevs_discovered": 2, 00:07:07.040 "num_base_bdevs_operational": 2, 00:07:07.040 "base_bdevs_list": [ 00:07:07.040 { 00:07:07.040 "name": "BaseBdev1", 00:07:07.040 "uuid": "3ff2c977-5e3b-478e-9125-979fb0d7d5a3", 00:07:07.040 "is_configured": true, 00:07:07.040 "data_offset": 2048, 00:07:07.040 "data_size": 63488 00:07:07.040 }, 00:07:07.040 { 00:07:07.040 "name": "BaseBdev2", 00:07:07.040 "uuid": "9729192b-4463-4a1b-8e11-93ac68b031af", 00:07:07.040 "is_configured": true, 00:07:07.040 "data_offset": 2048, 00:07:07.040 "data_size": 63488 00:07:07.040 } 00:07:07.040 ] 00:07:07.040 }' 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.040 09:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.299 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:07.299 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:07.299 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.299 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.299 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.299 09:20:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:07.300 [2024-12-12 09:20:41.255427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.300 "name": "Existed_Raid", 00:07:07.300 "aliases": [ 00:07:07.300 "241b18d8-d21b-4678-b366-54defe1c50f8" 00:07:07.300 ], 00:07:07.300 "product_name": "Raid Volume", 00:07:07.300 "block_size": 512, 00:07:07.300 "num_blocks": 126976, 00:07:07.300 "uuid": "241b18d8-d21b-4678-b366-54defe1c50f8", 00:07:07.300 "assigned_rate_limits": { 00:07:07.300 "rw_ios_per_sec": 0, 00:07:07.300 "rw_mbytes_per_sec": 0, 00:07:07.300 "r_mbytes_per_sec": 0, 00:07:07.300 "w_mbytes_per_sec": 0 00:07:07.300 }, 00:07:07.300 "claimed": false, 00:07:07.300 "zoned": false, 00:07:07.300 "supported_io_types": { 00:07:07.300 "read": true, 00:07:07.300 "write": true, 00:07:07.300 "unmap": true, 00:07:07.300 "flush": true, 00:07:07.300 "reset": true, 00:07:07.300 "nvme_admin": false, 00:07:07.300 "nvme_io": false, 00:07:07.300 "nvme_io_md": false, 00:07:07.300 "write_zeroes": true, 00:07:07.300 "zcopy": false, 00:07:07.300 "get_zone_info": false, 00:07:07.300 "zone_management": false, 00:07:07.300 "zone_append": false, 00:07:07.300 "compare": false, 00:07:07.300 "compare_and_write": false, 00:07:07.300 "abort": false, 00:07:07.300 "seek_hole": false, 
00:07:07.300 "seek_data": false, 00:07:07.300 "copy": false, 00:07:07.300 "nvme_iov_md": false 00:07:07.300 }, 00:07:07.300 "memory_domains": [ 00:07:07.300 { 00:07:07.300 "dma_device_id": "system", 00:07:07.300 "dma_device_type": 1 00:07:07.300 }, 00:07:07.300 { 00:07:07.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.300 "dma_device_type": 2 00:07:07.300 }, 00:07:07.300 { 00:07:07.300 "dma_device_id": "system", 00:07:07.300 "dma_device_type": 1 00:07:07.300 }, 00:07:07.300 { 00:07:07.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.300 "dma_device_type": 2 00:07:07.300 } 00:07:07.300 ], 00:07:07.300 "driver_specific": { 00:07:07.300 "raid": { 00:07:07.300 "uuid": "241b18d8-d21b-4678-b366-54defe1c50f8", 00:07:07.300 "strip_size_kb": 64, 00:07:07.300 "state": "online", 00:07:07.300 "raid_level": "raid0", 00:07:07.300 "superblock": true, 00:07:07.300 "num_base_bdevs": 2, 00:07:07.300 "num_base_bdevs_discovered": 2, 00:07:07.300 "num_base_bdevs_operational": 2, 00:07:07.300 "base_bdevs_list": [ 00:07:07.300 { 00:07:07.300 "name": "BaseBdev1", 00:07:07.300 "uuid": "3ff2c977-5e3b-478e-9125-979fb0d7d5a3", 00:07:07.300 "is_configured": true, 00:07:07.300 "data_offset": 2048, 00:07:07.300 "data_size": 63488 00:07:07.300 }, 00:07:07.300 { 00:07:07.300 "name": "BaseBdev2", 00:07:07.300 "uuid": "9729192b-4463-4a1b-8e11-93ac68b031af", 00:07:07.300 "is_configured": true, 00:07:07.300 "data_offset": 2048, 00:07:07.300 "data_size": 63488 00:07:07.300 } 00:07:07.300 ] 00:07:07.300 } 00:07:07.300 } 00:07:07.300 }' 00:07:07.300 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:07.560 BaseBdev2' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.560 09:20:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 [2024-12-12 09:20:41.454930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:07.560 [2024-12-12 09:20:41.454998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.560 [2024-12-12 09:20:41.455063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.560 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.820 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.820 "name": "Existed_Raid", 00:07:07.820 "uuid": "241b18d8-d21b-4678-b366-54defe1c50f8", 00:07:07.820 "strip_size_kb": 64, 00:07:07.820 "state": "offline", 00:07:07.820 "raid_level": "raid0", 00:07:07.820 "superblock": true, 00:07:07.820 "num_base_bdevs": 2, 00:07:07.820 "num_base_bdevs_discovered": 1, 00:07:07.820 "num_base_bdevs_operational": 1, 00:07:07.820 "base_bdevs_list": [ 00:07:07.820 { 00:07:07.820 "name": null, 00:07:07.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.820 "is_configured": false, 00:07:07.820 "data_offset": 0, 00:07:07.820 "data_size": 63488 00:07:07.820 }, 00:07:07.820 { 00:07:07.820 "name": "BaseBdev2", 00:07:07.820 "uuid": 
"9729192b-4463-4a1b-8e11-93ac68b031af", 00:07:07.820 "is_configured": true, 00:07:07.820 "data_offset": 2048, 00:07:07.820 "data_size": 63488 00:07:07.820 } 00:07:07.820 ] 00:07:07.820 }' 00:07:07.820 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.820 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:08.080 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.080 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.080 09:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:08.080 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.080 09:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.080 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:08.080 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.080 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:08.080 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.080 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 [2024-12-12 09:20:42.047144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.080 [2024-12-12 09:20:42.047211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
Existed_Raid, state offline 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62103 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62103 ']' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62103 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62103 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.339 killing process with pid 62103 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62103' 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62103 00:07:08.339 [2024-12-12 09:20:42.234477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.339 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62103 00:07:08.339 [2024-12-12 09:20:42.252785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.771 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:09.771 00:07:09.771 real 0m5.125s 00:07:09.771 user 0m7.211s 00:07:09.771 sys 0m0.904s 00:07:09.771 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.771 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.771 ************************************ 00:07:09.771 END TEST raid_state_function_test_sb 00:07:09.771 ************************************ 00:07:09.771 09:20:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:09.771 09:20:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:09.771 09:20:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.771 09:20:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.771 ************************************ 00:07:09.771 START TEST raid_superblock_test 00:07:09.771 ************************************ 00:07:09.771 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:09.771 
09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:09.771 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:09.771 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:09.771 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:09.771 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62354 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 62354 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62354 ']' 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.772 09:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.772 [2024-12-12 09:20:43.648992] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:09.772 [2024-12-12 09:20:43.649135] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62354 ] 00:07:10.031 [2024-12-12 09:20:43.830182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.031 [2024-12-12 09:20:43.971365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.290 [2024-12-12 09:20:44.210844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.290 [2024-12-12 09:20:44.210899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.550 09:20:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.550 malloc1 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.550 [2024-12-12 09:20:44.521854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:10.550 [2024-12-12 09:20:44.521982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.550 [2024-12-12 09:20:44.522025] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:10.550 [2024-12-12 09:20:44.522054] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.550 [2024-12-12 09:20:44.524438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.550 [2024-12-12 09:20:44.524508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:10.550 pt1 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.550 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.811 malloc2 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.811 [2024-12-12 09:20:44.586785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:10.811 [2024-12-12 09:20:44.586842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.811 [2024-12-12 09:20:44.586866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:10.811 [2024-12-12 09:20:44.586875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.811 [2024-12-12 09:20:44.589327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.811 [2024-12-12 09:20:44.589357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:10.811 pt2 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.811 [2024-12-12 09:20:44.598804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:10.811 [2024-12-12 09:20:44.600973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:10.811 [2024-12-12 09:20:44.601233] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:10.811 [2024-12-12 09:20:44.601251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.811 [2024-12-12 09:20:44.601497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.811 [2024-12-12 09:20:44.601646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:10.811 [2024-12-12 09:20:44.601657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:10.811 [2024-12-12 09:20:44.601806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.811 "name": "raid_bdev1", 00:07:10.811 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:10.811 "strip_size_kb": 64, 00:07:10.811 "state": "online", 00:07:10.811 "raid_level": "raid0", 00:07:10.811 "superblock": true, 00:07:10.811 "num_base_bdevs": 2, 00:07:10.811 "num_base_bdevs_discovered": 2, 00:07:10.811 "num_base_bdevs_operational": 2, 00:07:10.811 "base_bdevs_list": [ 00:07:10.811 { 00:07:10.811 "name": "pt1", 00:07:10.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.811 "is_configured": true, 00:07:10.811 "data_offset": 2048, 00:07:10.811 "data_size": 63488 00:07:10.811 }, 00:07:10.811 { 00:07:10.811 "name": "pt2", 00:07:10.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.811 "is_configured": true, 00:07:10.811 "data_offset": 2048, 00:07:10.811 "data_size": 63488 00:07:10.811 } 00:07:10.811 ] 00:07:10.811 }' 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.811 09:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.070 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 [2024-12-12 09:20:45.078242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.331 "name": "raid_bdev1", 00:07:11.331 "aliases": [ 00:07:11.331 "580883d1-5267-46cf-b49b-69e92e571823" 00:07:11.331 ], 00:07:11.331 "product_name": "Raid Volume", 00:07:11.331 "block_size": 512, 00:07:11.331 "num_blocks": 126976, 00:07:11.331 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:11.331 "assigned_rate_limits": { 00:07:11.331 "rw_ios_per_sec": 0, 00:07:11.331 "rw_mbytes_per_sec": 0, 00:07:11.331 "r_mbytes_per_sec": 0, 00:07:11.331 "w_mbytes_per_sec": 0 00:07:11.331 }, 00:07:11.331 "claimed": false, 00:07:11.331 "zoned": false, 00:07:11.331 "supported_io_types": { 00:07:11.331 "read": true, 00:07:11.331 "write": true, 00:07:11.331 "unmap": true, 00:07:11.331 "flush": true, 00:07:11.331 "reset": true, 00:07:11.331 "nvme_admin": false, 00:07:11.331 "nvme_io": false, 00:07:11.331 "nvme_io_md": false, 00:07:11.331 "write_zeroes": true, 00:07:11.331 "zcopy": false, 00:07:11.331 "get_zone_info": false, 
00:07:11.331 "zone_management": false, 00:07:11.331 "zone_append": false, 00:07:11.331 "compare": false, 00:07:11.331 "compare_and_write": false, 00:07:11.331 "abort": false, 00:07:11.331 "seek_hole": false, 00:07:11.331 "seek_data": false, 00:07:11.331 "copy": false, 00:07:11.331 "nvme_iov_md": false 00:07:11.331 }, 00:07:11.331 "memory_domains": [ 00:07:11.331 { 00:07:11.331 "dma_device_id": "system", 00:07:11.331 "dma_device_type": 1 00:07:11.331 }, 00:07:11.331 { 00:07:11.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.331 "dma_device_type": 2 00:07:11.331 }, 00:07:11.331 { 00:07:11.331 "dma_device_id": "system", 00:07:11.331 "dma_device_type": 1 00:07:11.331 }, 00:07:11.331 { 00:07:11.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.331 "dma_device_type": 2 00:07:11.331 } 00:07:11.331 ], 00:07:11.331 "driver_specific": { 00:07:11.331 "raid": { 00:07:11.331 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:11.331 "strip_size_kb": 64, 00:07:11.331 "state": "online", 00:07:11.331 "raid_level": "raid0", 00:07:11.331 "superblock": true, 00:07:11.331 "num_base_bdevs": 2, 00:07:11.331 "num_base_bdevs_discovered": 2, 00:07:11.331 "num_base_bdevs_operational": 2, 00:07:11.331 "base_bdevs_list": [ 00:07:11.331 { 00:07:11.331 "name": "pt1", 00:07:11.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.331 "is_configured": true, 00:07:11.331 "data_offset": 2048, 00:07:11.331 "data_size": 63488 00:07:11.331 }, 00:07:11.331 { 00:07:11.331 "name": "pt2", 00:07:11.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.331 "is_configured": true, 00:07:11.331 "data_offset": 2048, 00:07:11.331 "data_size": 63488 00:07:11.331 } 00:07:11.331 ] 00:07:11.331 } 00:07:11.331 } 00:07:11.331 }' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:07:11.331 pt2' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.331 [2024-12-12 09:20:45.289812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=580883d1-5267-46cf-b49b-69e92e571823 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 580883d1-5267-46cf-b49b-69e92e571823 ']' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.331 [2024-12-12 09:20:45.337472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.331 [2024-12-12 09:20:45.337494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.331 [2024-12-12 09:20:45.337575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.331 [2024-12-12 09:20:45.337621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.331 [2024-12-12 09:20:45.337635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
raid_bdev1, state offline 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.331 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # 
rpc_cmd bdev_get_bdevs 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 [2024-12-12 09:20:45.473283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 
00:07:11.592 [2024-12-12 09:20:45.475370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:11.592 [2024-12-12 09:20:45.475435] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:11.592 [2024-12-12 09:20:45.475481] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:11.592 [2024-12-12 09:20:45.475495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.592 [2024-12-12 09:20:45.475506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:11.592 request: 00:07:11.592 { 00:07:11.592 "name": "raid_bdev1", 00:07:11.592 "raid_level": "raid0", 00:07:11.592 "base_bdevs": [ 00:07:11.592 "malloc1", 00:07:11.592 "malloc2" 00:07:11.592 ], 00:07:11.592 "strip_size_kb": 64, 00:07:11.592 "superblock": false, 00:07:11.592 "method": "bdev_raid_create", 00:07:11.592 "req_id": 1 00:07:11.592 } 00:07:11.592 Got JSON-RPC error response 00:07:11.592 response: 00:07:11.592 { 00:07:11.592 "code": -17, 00:07:11.592 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:11.592 } 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r 
'.[]' 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.592 [2024-12-12 09:20:45.541138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:11.592 [2024-12-12 09:20:45.541229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.592 [2024-12-12 09:20:45.541260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:11.592 [2024-12-12 09:20:45.541289] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.592 [2024-12-12 09:20:45.543724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.592 [2024-12-12 09:20:45.543807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:11.592 [2024-12-12 09:20:45.543903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:11.592 [2024-12-12 09:20:45.543985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:11.592 pt1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.592 09:20:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.592 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.593 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.593 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.593 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.593 "name": "raid_bdev1", 00:07:11.593 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:11.593 "strip_size_kb": 64, 00:07:11.593 "state": "configuring", 00:07:11.593 "raid_level": "raid0", 00:07:11.593 "superblock": true, 00:07:11.593 
"num_base_bdevs": 2, 00:07:11.593 "num_base_bdevs_discovered": 1, 00:07:11.593 "num_base_bdevs_operational": 2, 00:07:11.593 "base_bdevs_list": [ 00:07:11.593 { 00:07:11.593 "name": "pt1", 00:07:11.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.593 "is_configured": true, 00:07:11.593 "data_offset": 2048, 00:07:11.593 "data_size": 63488 00:07:11.593 }, 00:07:11.593 { 00:07:11.593 "name": null, 00:07:11.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.593 "is_configured": false, 00:07:11.593 "data_offset": 2048, 00:07:11.593 "data_size": 63488 00:07:11.593 } 00:07:11.593 ] 00:07:11.593 }' 00:07:11.593 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.593 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.162 [2024-12-12 09:20:45.980480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:12.162 [2024-12-12 09:20:45.980635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.162 [2024-12-12 09:20:45.980665] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:12.162 [2024-12-12 09:20:45.980678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.162 [2024-12-12 
09:20:45.981327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.162 [2024-12-12 09:20:45.981354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:12.162 [2024-12-12 09:20:45.981457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:12.162 [2024-12-12 09:20:45.981492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:12.162 [2024-12-12 09:20:45.981651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.162 [2024-12-12 09:20:45.981665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.162 [2024-12-12 09:20:45.981958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:12.162 [2024-12-12 09:20:45.982167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.162 [2024-12-12 09:20:45.982177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.162 [2024-12-12 09:20:45.982341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.162 pt2 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.162 09:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.162 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.162 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.162 "name": "raid_bdev1", 00:07:12.162 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:12.162 "strip_size_kb": 64, 00:07:12.162 "state": "online", 00:07:12.162 "raid_level": "raid0", 00:07:12.162 "superblock": true, 00:07:12.162 "num_base_bdevs": 2, 00:07:12.162 "num_base_bdevs_discovered": 2, 00:07:12.162 "num_base_bdevs_operational": 2, 00:07:12.162 "base_bdevs_list": [ 00:07:12.162 { 00:07:12.162 "name": "pt1", 00:07:12.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.162 "is_configured": true, 00:07:12.162 "data_offset": 2048, 00:07:12.162 "data_size": 63488 00:07:12.162 }, 00:07:12.162 { 00:07:12.162 "name": "pt2", 00:07:12.162 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:12.162 "is_configured": true, 00:07:12.162 "data_offset": 2048, 00:07:12.162 "data_size": 63488 00:07:12.162 } 00:07:12.162 ] 00:07:12.162 }' 00:07:12.162 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.162 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.421 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.422 [2024-12-12 09:20:46.420156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.422 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.422 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.422 "name": "raid_bdev1", 00:07:12.422 "aliases": [ 00:07:12.422 "580883d1-5267-46cf-b49b-69e92e571823" 00:07:12.422 ], 00:07:12.422 "product_name": "Raid Volume", 00:07:12.422 "block_size": 512, 00:07:12.422 
"num_blocks": 126976, 00:07:12.422 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:12.422 "assigned_rate_limits": { 00:07:12.422 "rw_ios_per_sec": 0, 00:07:12.422 "rw_mbytes_per_sec": 0, 00:07:12.422 "r_mbytes_per_sec": 0, 00:07:12.422 "w_mbytes_per_sec": 0 00:07:12.422 }, 00:07:12.422 "claimed": false, 00:07:12.422 "zoned": false, 00:07:12.422 "supported_io_types": { 00:07:12.422 "read": true, 00:07:12.422 "write": true, 00:07:12.422 "unmap": true, 00:07:12.422 "flush": true, 00:07:12.422 "reset": true, 00:07:12.422 "nvme_admin": false, 00:07:12.422 "nvme_io": false, 00:07:12.422 "nvme_io_md": false, 00:07:12.422 "write_zeroes": true, 00:07:12.422 "zcopy": false, 00:07:12.422 "get_zone_info": false, 00:07:12.422 "zone_management": false, 00:07:12.422 "zone_append": false, 00:07:12.422 "compare": false, 00:07:12.422 "compare_and_write": false, 00:07:12.422 "abort": false, 00:07:12.422 "seek_hole": false, 00:07:12.422 "seek_data": false, 00:07:12.422 "copy": false, 00:07:12.422 "nvme_iov_md": false 00:07:12.422 }, 00:07:12.422 "memory_domains": [ 00:07:12.422 { 00:07:12.422 "dma_device_id": "system", 00:07:12.422 "dma_device_type": 1 00:07:12.422 }, 00:07:12.422 { 00:07:12.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.422 "dma_device_type": 2 00:07:12.422 }, 00:07:12.422 { 00:07:12.422 "dma_device_id": "system", 00:07:12.422 "dma_device_type": 1 00:07:12.422 }, 00:07:12.422 { 00:07:12.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.422 "dma_device_type": 2 00:07:12.422 } 00:07:12.422 ], 00:07:12.422 "driver_specific": { 00:07:12.422 "raid": { 00:07:12.422 "uuid": "580883d1-5267-46cf-b49b-69e92e571823", 00:07:12.422 "strip_size_kb": 64, 00:07:12.422 "state": "online", 00:07:12.422 "raid_level": "raid0", 00:07:12.422 "superblock": true, 00:07:12.422 "num_base_bdevs": 2, 00:07:12.422 "num_base_bdevs_discovered": 2, 00:07:12.422 "num_base_bdevs_operational": 2, 00:07:12.422 "base_bdevs_list": [ 00:07:12.422 { 00:07:12.422 "name": "pt1", 
00:07:12.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.422 "is_configured": true, 00:07:12.422 "data_offset": 2048, 00:07:12.422 "data_size": 63488 00:07:12.422 }, 00:07:12.422 { 00:07:12.422 "name": "pt2", 00:07:12.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.422 "is_configured": true, 00:07:12.422 "data_offset": 2048, 00:07:12.422 "data_size": 63488 00:07:12.422 } 00:07:12.422 ] 00:07:12.422 } 00:07:12.422 } 00:07:12.422 }' 00:07:12.422 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:12.681 pt2' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.681 [2024-12-12 09:20:46.656036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 580883d1-5267-46cf-b49b-69e92e571823 '!=' 580883d1-5267-46cf-b49b-69e92e571823 ']' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 62354 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62354 ']' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62354 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.681 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62354 00:07:12.941 killing process with pid 62354 00:07:12.941 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.941 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.941 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62354' 00:07:12.941 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62354 00:07:12.941 [2024-12-12 09:20:46.734575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.941 [2024-12-12 09:20:46.734663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.941 [2024-12-12 09:20:46.734717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.941 [2024-12-12 09:20:46.734730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:12.941 09:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62354 00:07:12.941 [2024-12-12 09:20:46.960224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.327 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:14.327 00:07:14.327 real 0m4.642s 00:07:14.327 user 0m6.347s 00:07:14.327 
sys 0m0.865s 00:07:14.327 09:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.327 ************************************ 00:07:14.327 END TEST raid_superblock_test 00:07:14.327 ************************************ 00:07:14.327 09:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.327 09:20:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:14.327 09:20:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.328 09:20:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.328 09:20:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.328 ************************************ 00:07:14.328 START TEST raid_read_error_test 00:07:14.328 ************************************ 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EC3MVkKQ89 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62567 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62567 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62567 ']' 00:07:14.328 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.328 09:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.587 [2024-12-12 09:20:48.365083] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:14.587 [2024-12-12 09:20:48.365318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62567 ] 00:07:14.587 [2024-12-12 09:20:48.542695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.846 [2024-12-12 09:20:48.678862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.106 [2024-12-12 09:20:48.917419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.106 [2024-12-12 09:20:48.917548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:15.366 
09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 BaseBdev1_malloc 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 true 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 [2024-12-12 09:20:49.282113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:15.366 [2024-12-12 09:20:49.282175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.366 [2024-12-12 09:20:49.282198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:15.366 [2024-12-12 09:20:49.282210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.366 [2024-12-12 09:20:49.284647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.366 [2024-12-12 09:20:49.284691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:15.366 BaseBdev1 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 BaseBdev2_malloc 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 true 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 [2024-12-12 09:20:49.353031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:15.366 [2024-12-12 09:20:49.353094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.366 [2024-12-12 09:20:49.353113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:15.366 [2024-12-12 09:20:49.353125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.366 [2024-12-12 09:20:49.355529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:07:15.366 [2024-12-12 09:20:49.355567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:15.366 BaseBdev2 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.366 [2024-12-12 09:20:49.365077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.366 [2024-12-12 09:20:49.367165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.366 [2024-12-12 09:20:49.367376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.366 [2024-12-12 09:20:49.367394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.366 [2024-12-12 09:20:49.367622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:15.366 [2024-12-12 09:20:49.367814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.366 [2024-12-12 09:20:49.367828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:15.366 [2024-12-12 09:20:49.368001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.366 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.626 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.626 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.626 "name": "raid_bdev1", 00:07:15.626 "uuid": "a7b04668-e60d-44b6-a8f3-6c560c91b1de", 00:07:15.626 "strip_size_kb": 64, 00:07:15.626 "state": "online", 00:07:15.626 "raid_level": "raid0", 00:07:15.626 "superblock": true, 00:07:15.626 "num_base_bdevs": 2, 00:07:15.626 "num_base_bdevs_discovered": 2, 00:07:15.626 "num_base_bdevs_operational": 2, 00:07:15.626 "base_bdevs_list": [ 00:07:15.626 { 00:07:15.626 "name": "BaseBdev1", 00:07:15.626 "uuid": 
"9364be74-dc4b-5603-80b7-337926a3a9fe", 00:07:15.626 "is_configured": true, 00:07:15.626 "data_offset": 2048, 00:07:15.626 "data_size": 63488 00:07:15.626 }, 00:07:15.626 { 00:07:15.626 "name": "BaseBdev2", 00:07:15.626 "uuid": "0986fa35-a45e-5380-ae82-2c0b5dde5c64", 00:07:15.626 "is_configured": true, 00:07:15.626 "data_offset": 2048, 00:07:15.626 "data_size": 63488 00:07:15.626 } 00:07:15.626 ] 00:07:15.626 }' 00:07:15.626 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.626 09:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.885 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:15.885 09:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:15.885 [2024-12-12 09:20:49.885642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.083 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.083 "name": "raid_bdev1", 00:07:17.083 "uuid": "a7b04668-e60d-44b6-a8f3-6c560c91b1de", 00:07:17.083 "strip_size_kb": 64, 00:07:17.083 "state": "online", 00:07:17.083 "raid_level": "raid0", 00:07:17.083 "superblock": true, 00:07:17.083 "num_base_bdevs": 2, 00:07:17.083 "num_base_bdevs_discovered": 2, 00:07:17.083 "num_base_bdevs_operational": 2, 00:07:17.083 "base_bdevs_list": [ 00:07:17.083 { 00:07:17.083 "name": "BaseBdev1", 00:07:17.083 "uuid": 
"9364be74-dc4b-5603-80b7-337926a3a9fe", 00:07:17.083 "is_configured": true, 00:07:17.083 "data_offset": 2048, 00:07:17.083 "data_size": 63488 00:07:17.083 }, 00:07:17.083 { 00:07:17.083 "name": "BaseBdev2", 00:07:17.083 "uuid": "0986fa35-a45e-5380-ae82-2c0b5dde5c64", 00:07:17.083 "is_configured": true, 00:07:17.083 "data_offset": 2048, 00:07:17.083 "data_size": 63488 00:07:17.083 } 00:07:17.083 ] 00:07:17.083 }' 00:07:17.083 09:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.083 09:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.343 [2024-12-12 09:20:51.274242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.343 [2024-12-12 09:20:51.274381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.343 [2024-12-12 09:20:51.277105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.343 [2024-12-12 09:20:51.277193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.343 [2024-12-12 09:20:51.277264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.343 [2024-12-12 09:20:51.277314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:17.343 { 00:07:17.343 "results": [ 00:07:17.343 { 00:07:17.343 "job": "raid_bdev1", 00:07:17.343 "core_mask": "0x1", 00:07:17.343 "workload": "randrw", 00:07:17.343 "percentage": 50, 00:07:17.343 "status": "finished", 00:07:17.343 "queue_depth": 1, 00:07:17.343 "io_size": 
131072, 00:07:17.343 "runtime": 1.389464, 00:07:17.343 "iops": 13896.725643845397, 00:07:17.343 "mibps": 1737.0907054806746, 00:07:17.343 "io_failed": 1, 00:07:17.343 "io_timeout": 0, 00:07:17.343 "avg_latency_us": 100.82350941544419, 00:07:17.343 "min_latency_us": 25.4882096069869, 00:07:17.343 "max_latency_us": 1416.6078602620087 00:07:17.343 } 00:07:17.343 ], 00:07:17.343 "core_count": 1 00:07:17.343 } 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62567 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62567 ']' 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62567 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62567 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62567' 00:07:17.343 killing process with pid 62567 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62567 00:07:17.343 [2024-12-12 09:20:51.324532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.343 09:20:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62567 00:07:17.603 [2024-12-12 09:20:51.472703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.982 09:20:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EC3MVkKQ89 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:18.982 00:07:18.982 real 0m4.509s 00:07:18.982 user 0m5.285s 00:07:18.982 sys 0m0.658s 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.982 ************************************ 00:07:18.982 END TEST raid_read_error_test 00:07:18.982 ************************************ 00:07:18.982 09:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.982 09:20:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:18.982 09:20:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:18.982 09:20:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.982 09:20:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.982 ************************************ 00:07:18.982 START TEST raid_write_error_test 00:07:18.982 ************************************ 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:18.982 
09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:18.982 09:20:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P2N0mLBuyG 00:07:18.982 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62707 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62707 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62707 ']' 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.983 09:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.983 [2024-12-12 09:20:52.942935] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:07:18.983 [2024-12-12 09:20:52.943069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62707 ] 00:07:19.241 [2024-12-12 09:20:53.121029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.241 [2024-12-12 09:20:53.264605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.500 [2024-12-12 09:20:53.498319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.500 [2024-12-12 09:20:53.498512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.759 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.759 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.759 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:19.759 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:19.759 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.759 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.018 BaseBdev1_malloc 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.018 true 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.018 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 [2024-12-12 09:20:53.827059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:20.019 [2024-12-12 09:20:53.827122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.019 [2024-12-12 09:20:53.827145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:20.019 [2024-12-12 09:20:53.827157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.019 [2024-12-12 09:20:53.829516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.019 [2024-12-12 09:20:53.829555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:20.019 BaseBdev1 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 BaseBdev2_malloc 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:20.019 09:20:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 true 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 [2024-12-12 09:20:53.900764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:20.019 [2024-12-12 09:20:53.900828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.019 [2024-12-12 09:20:53.900846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:20.019 [2024-12-12 09:20:53.900858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.019 [2024-12-12 09:20:53.903310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.019 [2024-12-12 09:20:53.903348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:20.019 BaseBdev2 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 [2024-12-12 09:20:53.912794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:20.019 [2024-12-12 09:20:53.914945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.019 [2024-12-12 09:20:53.915148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.019 [2024-12-12 09:20:53.915167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.019 [2024-12-12 09:20:53.915417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:20.019 [2024-12-12 09:20:53.915610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.019 [2024-12-12 09:20:53.915623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:20.019 [2024-12-12 09:20:53.915799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.019 "name": "raid_bdev1", 00:07:20.019 "uuid": "4e2d522a-3d94-4dc0-80a8-d0810c2cd936", 00:07:20.019 "strip_size_kb": 64, 00:07:20.019 "state": "online", 00:07:20.019 "raid_level": "raid0", 00:07:20.019 "superblock": true, 00:07:20.019 "num_base_bdevs": 2, 00:07:20.019 "num_base_bdevs_discovered": 2, 00:07:20.019 "num_base_bdevs_operational": 2, 00:07:20.019 "base_bdevs_list": [ 00:07:20.019 { 00:07:20.019 "name": "BaseBdev1", 00:07:20.019 "uuid": "5967e03e-5f65-50d0-819b-eb97dce329ca", 00:07:20.019 "is_configured": true, 00:07:20.019 "data_offset": 2048, 00:07:20.019 "data_size": 63488 00:07:20.019 }, 00:07:20.019 { 00:07:20.019 "name": "BaseBdev2", 00:07:20.019 "uuid": "756ab77b-739e-5160-8c9c-80bc1541c08d", 00:07:20.019 "is_configured": true, 00:07:20.019 "data_offset": 2048, 00:07:20.019 "data_size": 63488 00:07:20.019 } 00:07:20.019 ] 00:07:20.019 }' 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.019 09:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.587 09:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:20.587 09:20:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:20.587 [2024-12-12 09:20:54.453395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:21.524 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.525 09:20:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.525 "name": "raid_bdev1", 00:07:21.525 "uuid": "4e2d522a-3d94-4dc0-80a8-d0810c2cd936", 00:07:21.525 "strip_size_kb": 64, 00:07:21.525 "state": "online", 00:07:21.525 "raid_level": "raid0", 00:07:21.525 "superblock": true, 00:07:21.525 "num_base_bdevs": 2, 00:07:21.525 "num_base_bdevs_discovered": 2, 00:07:21.525 "num_base_bdevs_operational": 2, 00:07:21.525 "base_bdevs_list": [ 00:07:21.525 { 00:07:21.525 "name": "BaseBdev1", 00:07:21.525 "uuid": "5967e03e-5f65-50d0-819b-eb97dce329ca", 00:07:21.525 "is_configured": true, 00:07:21.525 "data_offset": 2048, 00:07:21.525 "data_size": 63488 00:07:21.525 }, 00:07:21.525 { 00:07:21.525 "name": "BaseBdev2", 00:07:21.525 "uuid": "756ab77b-739e-5160-8c9c-80bc1541c08d", 00:07:21.525 "is_configured": true, 00:07:21.525 "data_offset": 2048, 00:07:21.525 "data_size": 63488 00:07:21.525 } 00:07:21.525 ] 00:07:21.525 }' 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.525 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.783 [2024-12-12 09:20:55.773506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.783 [2024-12-12 09:20:55.773658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.783 [2024-12-12 09:20:55.776387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.783 [2024-12-12 09:20:55.776474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.783 [2024-12-12 09:20:55.776529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.783 [2024-12-12 09:20:55.776572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:21.783 { 00:07:21.783 "results": [ 00:07:21.783 { 00:07:21.783 "job": "raid_bdev1", 00:07:21.783 "core_mask": "0x1", 00:07:21.783 "workload": "randrw", 00:07:21.783 "percentage": 50, 00:07:21.783 "status": "finished", 00:07:21.783 "queue_depth": 1, 00:07:21.783 "io_size": 131072, 00:07:21.783 "runtime": 1.320752, 00:07:21.783 "iops": 14023.07170460465, 00:07:21.783 "mibps": 1752.8839630755813, 00:07:21.783 "io_failed": 1, 00:07:21.783 "io_timeout": 0, 00:07:21.783 "avg_latency_us": 100.1090755287351, 00:07:21.783 "min_latency_us": 25.9353711790393, 00:07:21.783 "max_latency_us": 1402.2986899563318 00:07:21.783 } 00:07:21.783 ], 00:07:21.783 "core_count": 1 00:07:21.783 } 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62707 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62707 ']' 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62707 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.783 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62707 00:07:22.042 killing process with pid 62707 00:07:22.042 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.042 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.042 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62707' 00:07:22.042 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62707 00:07:22.042 [2024-12-12 09:20:55.814671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.042 09:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62707 00:07:22.042 [2024-12-12 09:20:55.956345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P2N0mLBuyG 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:07:23.431 00:07:23.431 real 0m4.386s 00:07:23.431 user 0m5.081s 00:07:23.431 sys 0m0.633s 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.431 09:20:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.431 ************************************ 00:07:23.431 END TEST raid_write_error_test 00:07:23.431 ************************************ 00:07:23.431 09:20:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:23.431 09:20:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:23.431 09:20:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.431 09:20:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.431 09:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.431 ************************************ 00:07:23.431 START TEST raid_state_function_test 00:07:23.431 ************************************ 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:23.431 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:23.431 Process raid pid: 62851 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62851 
00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62851' 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62851 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62851 ']' 00:07:23.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.432 09:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 [2024-12-12 09:20:57.388993] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:07:23.432 [2024-12-12 09:20:57.389529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.690 [2024-12-12 09:20:57.562632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.690 [2024-12-12 09:20:57.701036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.949 [2024-12-12 09:20:57.935357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.949 [2024-12-12 09:20:57.935415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.207 [2024-12-12 09:20:58.210361] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.207 [2024-12-12 09:20:58.210425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.207 [2024-12-12 09:20:58.210435] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.207 [2024-12-12 09:20:58.210445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.207 09:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.207 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.466 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.466 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.466 "name": "Existed_Raid", 00:07:24.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.466 "strip_size_kb": 64, 00:07:24.466 "state": "configuring", 00:07:24.466 
"raid_level": "concat", 00:07:24.466 "superblock": false, 00:07:24.466 "num_base_bdevs": 2, 00:07:24.466 "num_base_bdevs_discovered": 0, 00:07:24.466 "num_base_bdevs_operational": 2, 00:07:24.466 "base_bdevs_list": [ 00:07:24.466 { 00:07:24.466 "name": "BaseBdev1", 00:07:24.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.466 "is_configured": false, 00:07:24.466 "data_offset": 0, 00:07:24.466 "data_size": 0 00:07:24.466 }, 00:07:24.466 { 00:07:24.466 "name": "BaseBdev2", 00:07:24.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.466 "is_configured": false, 00:07:24.466 "data_offset": 0, 00:07:24.466 "data_size": 0 00:07:24.466 } 00:07:24.466 ] 00:07:24.466 }' 00:07:24.466 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.466 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.725 [2024-12-12 09:20:58.653580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:24.725 [2024-12-12 09:20:58.653702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:24.725 [2024-12-12 09:20:58.665513] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.725 [2024-12-12 09:20:58.665613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.725 [2024-12-12 09:20:58.665645] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.725 [2024-12-12 09:20:58.665673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.725 [2024-12-12 09:20:58.718361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.725 BaseBdev1 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.725 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.725 [ 00:07:24.725 { 00:07:24.725 "name": "BaseBdev1", 00:07:24.725 "aliases": [ 00:07:24.725 "dacd7de6-2ae8-4272-83d3-c2651d845d79" 00:07:24.725 ], 00:07:24.725 "product_name": "Malloc disk", 00:07:24.725 "block_size": 512, 00:07:24.725 "num_blocks": 65536, 00:07:24.725 "uuid": "dacd7de6-2ae8-4272-83d3-c2651d845d79", 00:07:24.725 "assigned_rate_limits": { 00:07:24.725 "rw_ios_per_sec": 0, 00:07:24.725 "rw_mbytes_per_sec": 0, 00:07:24.725 "r_mbytes_per_sec": 0, 00:07:24.725 "w_mbytes_per_sec": 0 00:07:24.725 }, 00:07:24.725 "claimed": true, 00:07:24.725 "claim_type": "exclusive_write", 00:07:24.725 "zoned": false, 00:07:24.725 "supported_io_types": { 00:07:24.725 "read": true, 00:07:24.725 "write": true, 00:07:24.725 "unmap": true, 00:07:24.984 "flush": true, 00:07:24.984 "reset": true, 00:07:24.984 "nvme_admin": false, 00:07:24.984 "nvme_io": false, 00:07:24.984 "nvme_io_md": false, 00:07:24.984 "write_zeroes": true, 00:07:24.984 "zcopy": true, 00:07:24.984 "get_zone_info": false, 00:07:24.984 "zone_management": false, 00:07:24.984 "zone_append": false, 00:07:24.984 "compare": false, 00:07:24.984 "compare_and_write": false, 00:07:24.984 "abort": true, 00:07:24.984 "seek_hole": false, 00:07:24.984 "seek_data": false, 00:07:24.984 "copy": true, 00:07:24.984 "nvme_iov_md": 
false 00:07:24.984 }, 00:07:24.984 "memory_domains": [ 00:07:24.984 { 00:07:24.984 "dma_device_id": "system", 00:07:24.984 "dma_device_type": 1 00:07:24.984 }, 00:07:24.984 { 00:07:24.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.984 "dma_device_type": 2 00:07:24.984 } 00:07:24.984 ], 00:07:24.984 "driver_specific": {} 00:07:24.984 } 00:07:24.984 ] 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.984 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.985 
09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.985 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.985 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.985 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.985 "name": "Existed_Raid", 00:07:24.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.985 "strip_size_kb": 64, 00:07:24.985 "state": "configuring", 00:07:24.985 "raid_level": "concat", 00:07:24.985 "superblock": false, 00:07:24.985 "num_base_bdevs": 2, 00:07:24.985 "num_base_bdevs_discovered": 1, 00:07:24.985 "num_base_bdevs_operational": 2, 00:07:24.985 "base_bdevs_list": [ 00:07:24.985 { 00:07:24.985 "name": "BaseBdev1", 00:07:24.985 "uuid": "dacd7de6-2ae8-4272-83d3-c2651d845d79", 00:07:24.985 "is_configured": true, 00:07:24.985 "data_offset": 0, 00:07:24.985 "data_size": 65536 00:07:24.985 }, 00:07:24.985 { 00:07:24.985 "name": "BaseBdev2", 00:07:24.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.985 "is_configured": false, 00:07:24.985 "data_offset": 0, 00:07:24.985 "data_size": 0 00:07:24.985 } 00:07:24.985 ] 00:07:24.985 }' 00:07:24.985 09:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.985 09:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 [2024-12-12 09:20:59.153724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.244 [2024-12-12 09:20:59.153803] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 [2024-12-12 09:20:59.165707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.244 [2024-12-12 09:20:59.167876] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.244 [2024-12-12 09:20:59.167920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.244 "name": "Existed_Raid", 00:07:25.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.244 "strip_size_kb": 64, 00:07:25.244 "state": "configuring", 00:07:25.244 "raid_level": "concat", 00:07:25.244 "superblock": false, 00:07:25.244 "num_base_bdevs": 2, 00:07:25.244 "num_base_bdevs_discovered": 1, 00:07:25.244 "num_base_bdevs_operational": 2, 00:07:25.244 "base_bdevs_list": [ 00:07:25.244 { 00:07:25.244 "name": "BaseBdev1", 00:07:25.244 "uuid": "dacd7de6-2ae8-4272-83d3-c2651d845d79", 00:07:25.244 "is_configured": true, 00:07:25.244 "data_offset": 0, 00:07:25.244 "data_size": 65536 00:07:25.244 }, 00:07:25.244 { 00:07:25.244 "name": "BaseBdev2", 00:07:25.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.244 "is_configured": false, 00:07:25.244 "data_offset": 0, 00:07:25.244 "data_size": 0 00:07:25.244 } 
00:07:25.244 ] 00:07:25.244 }' 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.244 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.814 [2024-12-12 09:20:59.657991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.814 [2024-12-12 09:20:59.658162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:25.814 [2024-12-12 09:20:59.658200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:25.814 [2024-12-12 09:20:59.658571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.814 [2024-12-12 09:20:59.658849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:25.814 [2024-12-12 09:20:59.658900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:25.814 [2024-12-12 09:20:59.659269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.814 BaseBdev2 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.814 09:20:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.814 [ 00:07:25.814 { 00:07:25.814 "name": "BaseBdev2", 00:07:25.814 "aliases": [ 00:07:25.814 "2278c8c4-7cea-4736-b49d-9308c92cf58f" 00:07:25.814 ], 00:07:25.814 "product_name": "Malloc disk", 00:07:25.814 "block_size": 512, 00:07:25.814 "num_blocks": 65536, 00:07:25.814 "uuid": "2278c8c4-7cea-4736-b49d-9308c92cf58f", 00:07:25.814 "assigned_rate_limits": { 00:07:25.814 "rw_ios_per_sec": 0, 00:07:25.814 "rw_mbytes_per_sec": 0, 00:07:25.814 "r_mbytes_per_sec": 0, 00:07:25.814 "w_mbytes_per_sec": 0 00:07:25.814 }, 00:07:25.814 "claimed": true, 00:07:25.814 "claim_type": "exclusive_write", 00:07:25.814 "zoned": false, 00:07:25.814 "supported_io_types": { 00:07:25.814 "read": true, 00:07:25.814 "write": true, 00:07:25.814 "unmap": true, 00:07:25.814 "flush": true, 00:07:25.814 "reset": true, 00:07:25.814 "nvme_admin": false, 00:07:25.814 "nvme_io": false, 00:07:25.814 "nvme_io_md": 
false, 00:07:25.814 "write_zeroes": true, 00:07:25.814 "zcopy": true, 00:07:25.814 "get_zone_info": false, 00:07:25.814 "zone_management": false, 00:07:25.814 "zone_append": false, 00:07:25.814 "compare": false, 00:07:25.814 "compare_and_write": false, 00:07:25.814 "abort": true, 00:07:25.814 "seek_hole": false, 00:07:25.814 "seek_data": false, 00:07:25.814 "copy": true, 00:07:25.814 "nvme_iov_md": false 00:07:25.814 }, 00:07:25.814 "memory_domains": [ 00:07:25.814 { 00:07:25.814 "dma_device_id": "system", 00:07:25.814 "dma_device_type": 1 00:07:25.814 }, 00:07:25.814 { 00:07:25.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.814 "dma_device_type": 2 00:07:25.814 } 00:07:25.814 ], 00:07:25.814 "driver_specific": {} 00:07:25.814 } 00:07:25.814 ] 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.814 "name": "Existed_Raid", 00:07:25.814 "uuid": "b137ecdd-2325-410c-b43c-d5f12aec5979", 00:07:25.814 "strip_size_kb": 64, 00:07:25.814 "state": "online", 00:07:25.814 "raid_level": "concat", 00:07:25.814 "superblock": false, 00:07:25.814 "num_base_bdevs": 2, 00:07:25.814 "num_base_bdevs_discovered": 2, 00:07:25.814 "num_base_bdevs_operational": 2, 00:07:25.814 "base_bdevs_list": [ 00:07:25.814 { 00:07:25.814 "name": "BaseBdev1", 00:07:25.814 "uuid": "dacd7de6-2ae8-4272-83d3-c2651d845d79", 00:07:25.814 "is_configured": true, 00:07:25.814 "data_offset": 0, 00:07:25.814 "data_size": 65536 00:07:25.814 }, 00:07:25.814 { 00:07:25.814 "name": "BaseBdev2", 00:07:25.814 "uuid": "2278c8c4-7cea-4736-b49d-9308c92cf58f", 00:07:25.814 "is_configured": true, 00:07:25.814 "data_offset": 0, 00:07:25.814 "data_size": 65536 00:07:25.814 } 00:07:25.814 ] 00:07:25.814 }' 00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:25.814 09:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.073 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:26.074 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.074 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.074 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.074 [2024-12-12 09:21:00.093574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.333 "name": "Existed_Raid", 00:07:26.333 "aliases": [ 00:07:26.333 "b137ecdd-2325-410c-b43c-d5f12aec5979" 00:07:26.333 ], 00:07:26.333 "product_name": "Raid Volume", 00:07:26.333 "block_size": 512, 00:07:26.333 "num_blocks": 131072, 00:07:26.333 "uuid": "b137ecdd-2325-410c-b43c-d5f12aec5979", 00:07:26.333 "assigned_rate_limits": { 00:07:26.333 "rw_ios_per_sec": 0, 00:07:26.333 "rw_mbytes_per_sec": 0, 00:07:26.333 "r_mbytes_per_sec": 
0, 00:07:26.333 "w_mbytes_per_sec": 0 00:07:26.333 }, 00:07:26.333 "claimed": false, 00:07:26.333 "zoned": false, 00:07:26.333 "supported_io_types": { 00:07:26.333 "read": true, 00:07:26.333 "write": true, 00:07:26.333 "unmap": true, 00:07:26.333 "flush": true, 00:07:26.333 "reset": true, 00:07:26.333 "nvme_admin": false, 00:07:26.333 "nvme_io": false, 00:07:26.333 "nvme_io_md": false, 00:07:26.333 "write_zeroes": true, 00:07:26.333 "zcopy": false, 00:07:26.333 "get_zone_info": false, 00:07:26.333 "zone_management": false, 00:07:26.333 "zone_append": false, 00:07:26.333 "compare": false, 00:07:26.333 "compare_and_write": false, 00:07:26.333 "abort": false, 00:07:26.333 "seek_hole": false, 00:07:26.333 "seek_data": false, 00:07:26.333 "copy": false, 00:07:26.333 "nvme_iov_md": false 00:07:26.333 }, 00:07:26.333 "memory_domains": [ 00:07:26.333 { 00:07:26.333 "dma_device_id": "system", 00:07:26.333 "dma_device_type": 1 00:07:26.333 }, 00:07:26.333 { 00:07:26.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.333 "dma_device_type": 2 00:07:26.333 }, 00:07:26.333 { 00:07:26.333 "dma_device_id": "system", 00:07:26.333 "dma_device_type": 1 00:07:26.333 }, 00:07:26.333 { 00:07:26.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.333 "dma_device_type": 2 00:07:26.333 } 00:07:26.333 ], 00:07:26.333 "driver_specific": { 00:07:26.333 "raid": { 00:07:26.333 "uuid": "b137ecdd-2325-410c-b43c-d5f12aec5979", 00:07:26.333 "strip_size_kb": 64, 00:07:26.333 "state": "online", 00:07:26.333 "raid_level": "concat", 00:07:26.333 "superblock": false, 00:07:26.333 "num_base_bdevs": 2, 00:07:26.333 "num_base_bdevs_discovered": 2, 00:07:26.333 "num_base_bdevs_operational": 2, 00:07:26.333 "base_bdevs_list": [ 00:07:26.333 { 00:07:26.333 "name": "BaseBdev1", 00:07:26.333 "uuid": "dacd7de6-2ae8-4272-83d3-c2651d845d79", 00:07:26.333 "is_configured": true, 00:07:26.333 "data_offset": 0, 00:07:26.333 "data_size": 65536 00:07:26.333 }, 00:07:26.333 { 00:07:26.333 "name": "BaseBdev2", 
00:07:26.333 "uuid": "2278c8c4-7cea-4736-b49d-9308c92cf58f", 00:07:26.333 "is_configured": true, 00:07:26.333 "data_offset": 0, 00:07:26.333 "data_size": 65536 00:07:26.333 } 00:07:26.333 ] 00:07:26.333 } 00:07:26.333 } 00:07:26.333 }' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.333 BaseBdev2' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.333 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.334 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.334 [2024-12-12 09:21:00.305083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.334 [2024-12-12 09:21:00.305136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.334 [2024-12-12 09:21:00.305196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.593 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.594 "name": "Existed_Raid", 00:07:26.594 "uuid": "b137ecdd-2325-410c-b43c-d5f12aec5979", 00:07:26.594 "strip_size_kb": 64, 00:07:26.594 
"state": "offline", 00:07:26.594 "raid_level": "concat", 00:07:26.594 "superblock": false, 00:07:26.594 "num_base_bdevs": 2, 00:07:26.594 "num_base_bdevs_discovered": 1, 00:07:26.594 "num_base_bdevs_operational": 1, 00:07:26.594 "base_bdevs_list": [ 00:07:26.594 { 00:07:26.594 "name": null, 00:07:26.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.594 "is_configured": false, 00:07:26.594 "data_offset": 0, 00:07:26.594 "data_size": 65536 00:07:26.594 }, 00:07:26.594 { 00:07:26.594 "name": "BaseBdev2", 00:07:26.594 "uuid": "2278c8c4-7cea-4736-b49d-9308c92cf58f", 00:07:26.594 "is_configured": true, 00:07:26.594 "data_offset": 0, 00:07:26.594 "data_size": 65536 00:07:26.594 } 00:07:26.594 ] 00:07:26.594 }' 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.594 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.162 09:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 [2024-12-12 09:21:00.925174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.162 [2024-12-12 09:21:00.925252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62851 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62851 ']' 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62851 00:07:27.162 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62851 00:07:27.163 killing process with pid 62851 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62851' 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62851 00:07:27.163 [2024-12-12 09:21:01.099587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.163 09:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62851 00:07:27.163 [2024-12-12 09:21:01.116823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:28.542 00:07:28.542 real 0m5.023s 00:07:28.542 user 0m7.053s 00:07:28.542 sys 0m0.879s 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.542 ************************************ 00:07:28.542 END TEST raid_state_function_test 00:07:28.542 ************************************ 00:07:28.542 09:21:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:28.542 09:21:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:28.542 09:21:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.542 09:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.542 ************************************ 00:07:28.542 START TEST raid_state_function_test_sb 00:07:28.542 ************************************ 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:28.542 Process raid pid: 63103 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63103 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63103' 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63103 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63103 ']' 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.542 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.542 09:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.542 [2024-12-12 09:21:02.480424] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:28.542 [2024-12-12 09:21:02.480537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.801 [2024-12-12 09:21:02.655746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.801 [2024-12-12 09:21:02.791359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.060 [2024-12-12 09:21:03.031069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.060 [2024-12-12 09:21:03.031117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.319 [2024-12-12 09:21:03.296830] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:29.319 [2024-12-12 09:21:03.296887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.319 [2024-12-12 09:21:03.296897] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.319 [2024-12-12 09:21:03.296907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.319 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.578 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.578 "name": "Existed_Raid", 00:07:29.578 "uuid": "e6de07c0-0b72-413c-8c6e-c2d04b722bc4", 00:07:29.578 "strip_size_kb": 64, 00:07:29.578 "state": "configuring", 00:07:29.578 "raid_level": "concat", 00:07:29.578 "superblock": true, 00:07:29.578 "num_base_bdevs": 2, 00:07:29.578 "num_base_bdevs_discovered": 0, 00:07:29.578 "num_base_bdevs_operational": 2, 00:07:29.578 "base_bdevs_list": [ 00:07:29.578 { 00:07:29.578 "name": "BaseBdev1", 00:07:29.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.578 "is_configured": false, 00:07:29.578 "data_offset": 0, 00:07:29.578 "data_size": 0 00:07:29.578 }, 00:07:29.578 { 00:07:29.578 "name": "BaseBdev2", 00:07:29.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.578 "is_configured": false, 00:07:29.578 "data_offset": 0, 00:07:29.578 "data_size": 0 00:07:29.578 } 00:07:29.578 ] 00:07:29.578 }' 00:07:29.578 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.578 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 [2024-12-12 09:21:03.728149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:29.838 [2024-12-12 09:21:03.728238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 [2024-12-12 09:21:03.740087] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.838 [2024-12-12 09:21:03.740171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.838 [2024-12-12 09:21:03.740201] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.838 [2024-12-12 09:21:03.740229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 [2024-12-12 09:21:03.794462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.838 BaseBdev1 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 [ 00:07:29.838 { 00:07:29.838 "name": "BaseBdev1", 00:07:29.838 "aliases": [ 00:07:29.838 "e612064e-815d-4161-8304-2572739ff0ec" 00:07:29.838 ], 00:07:29.838 "product_name": "Malloc disk", 00:07:29.838 "block_size": 512, 00:07:29.838 "num_blocks": 65536, 00:07:29.838 "uuid": "e612064e-815d-4161-8304-2572739ff0ec", 00:07:29.838 "assigned_rate_limits": { 00:07:29.838 "rw_ios_per_sec": 0, 00:07:29.838 "rw_mbytes_per_sec": 0, 00:07:29.838 "r_mbytes_per_sec": 0, 00:07:29.838 "w_mbytes_per_sec": 0 00:07:29.838 }, 00:07:29.838 "claimed": true, 
00:07:29.838 "claim_type": "exclusive_write", 00:07:29.838 "zoned": false, 00:07:29.838 "supported_io_types": { 00:07:29.838 "read": true, 00:07:29.838 "write": true, 00:07:29.838 "unmap": true, 00:07:29.838 "flush": true, 00:07:29.838 "reset": true, 00:07:29.838 "nvme_admin": false, 00:07:29.838 "nvme_io": false, 00:07:29.838 "nvme_io_md": false, 00:07:29.838 "write_zeroes": true, 00:07:29.838 "zcopy": true, 00:07:29.838 "get_zone_info": false, 00:07:29.838 "zone_management": false, 00:07:29.838 "zone_append": false, 00:07:29.838 "compare": false, 00:07:29.838 "compare_and_write": false, 00:07:29.838 "abort": true, 00:07:29.838 "seek_hole": false, 00:07:29.838 "seek_data": false, 00:07:29.838 "copy": true, 00:07:29.838 "nvme_iov_md": false 00:07:29.838 }, 00:07:29.838 "memory_domains": [ 00:07:29.838 { 00:07:29.838 "dma_device_id": "system", 00:07:29.838 "dma_device_type": 1 00:07:29.838 }, 00:07:29.838 { 00:07:29.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.838 "dma_device_type": 2 00:07:29.838 } 00:07:29.838 ], 00:07:29.838 "driver_specific": {} 00:07:29.838 } 00:07:29.838 ] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.838 09:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.098 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.098 "name": "Existed_Raid", 00:07:30.098 "uuid": "723fc43a-2ab1-48af-8c34-c4eea6199380", 00:07:30.098 "strip_size_kb": 64, 00:07:30.098 "state": "configuring", 00:07:30.098 "raid_level": "concat", 00:07:30.098 "superblock": true, 00:07:30.098 "num_base_bdevs": 2, 00:07:30.098 "num_base_bdevs_discovered": 1, 00:07:30.098 "num_base_bdevs_operational": 2, 00:07:30.098 "base_bdevs_list": [ 00:07:30.098 { 00:07:30.098 "name": "BaseBdev1", 00:07:30.098 "uuid": "e612064e-815d-4161-8304-2572739ff0ec", 00:07:30.098 "is_configured": true, 00:07:30.098 "data_offset": 2048, 00:07:30.098 "data_size": 63488 00:07:30.098 }, 00:07:30.098 { 00:07:30.098 "name": "BaseBdev2", 00:07:30.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.098 
"is_configured": false, 00:07:30.098 "data_offset": 0, 00:07:30.098 "data_size": 0 00:07:30.098 } 00:07:30.098 ] 00:07:30.098 }' 00:07:30.098 09:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.098 09:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.358 [2024-12-12 09:21:04.233777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:30.358 [2024-12-12 09:21:04.233923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.358 [2024-12-12 09:21:04.245823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.358 [2024-12-12 09:21:04.248136] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:30.358 [2024-12-12 09:21:04.248241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.358 09:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.358 09:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.358 "name": "Existed_Raid", 00:07:30.358 "uuid": "048069c7-069d-4044-b1f9-67e717a90483", 00:07:30.358 "strip_size_kb": 64, 00:07:30.358 "state": "configuring", 00:07:30.358 "raid_level": "concat", 00:07:30.358 "superblock": true, 00:07:30.358 "num_base_bdevs": 2, 00:07:30.358 "num_base_bdevs_discovered": 1, 00:07:30.358 "num_base_bdevs_operational": 2, 00:07:30.358 "base_bdevs_list": [ 00:07:30.358 { 00:07:30.358 "name": "BaseBdev1", 00:07:30.358 "uuid": "e612064e-815d-4161-8304-2572739ff0ec", 00:07:30.358 "is_configured": true, 00:07:30.358 "data_offset": 2048, 00:07:30.358 "data_size": 63488 00:07:30.358 }, 00:07:30.358 { 00:07:30.358 "name": "BaseBdev2", 00:07:30.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.358 "is_configured": false, 00:07:30.358 "data_offset": 0, 00:07:30.358 "data_size": 0 00:07:30.358 } 00:07:30.358 ] 00:07:30.358 }' 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.358 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.934 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.934 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.934 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.934 BaseBdev2 00:07:30.935 [2024-12-12 09:21:04.701480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.935 [2024-12-12 09:21:04.701802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:30.935 [2024-12-12 09:21:04.701819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.935 [2024-12-12 09:21:04.702131] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.935 [2024-12-12 09:21:04.702320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:30.935 [2024-12-12 09:21:04.702335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:30.935 [2024-12-12 09:21:04.702495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.935 
09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.935 [ 00:07:30.935 { 00:07:30.935 "name": "BaseBdev2", 00:07:30.935 "aliases": [ 00:07:30.935 "173f325c-e830-4160-9018-841008990293" 00:07:30.935 ], 00:07:30.935 "product_name": "Malloc disk", 00:07:30.935 "block_size": 512, 00:07:30.935 "num_blocks": 65536, 00:07:30.935 "uuid": "173f325c-e830-4160-9018-841008990293", 00:07:30.935 "assigned_rate_limits": { 00:07:30.935 "rw_ios_per_sec": 0, 00:07:30.935 "rw_mbytes_per_sec": 0, 00:07:30.935 "r_mbytes_per_sec": 0, 00:07:30.935 "w_mbytes_per_sec": 0 00:07:30.935 }, 00:07:30.935 "claimed": true, 00:07:30.935 "claim_type": "exclusive_write", 00:07:30.935 "zoned": false, 00:07:30.935 "supported_io_types": { 00:07:30.935 "read": true, 00:07:30.935 "write": true, 00:07:30.935 "unmap": true, 00:07:30.935 "flush": true, 00:07:30.935 "reset": true, 00:07:30.935 "nvme_admin": false, 00:07:30.935 "nvme_io": false, 00:07:30.935 "nvme_io_md": false, 00:07:30.935 "write_zeroes": true, 00:07:30.935 "zcopy": true, 00:07:30.935 "get_zone_info": false, 00:07:30.935 "zone_management": false, 00:07:30.935 "zone_append": false, 00:07:30.935 "compare": false, 00:07:30.935 "compare_and_write": false, 00:07:30.935 "abort": true, 00:07:30.935 "seek_hole": false, 00:07:30.935 "seek_data": false, 00:07:30.935 "copy": true, 00:07:30.935 "nvme_iov_md": false 00:07:30.935 }, 00:07:30.935 "memory_domains": [ 00:07:30.935 { 00:07:30.935 "dma_device_id": "system", 00:07:30.935 "dma_device_type": 1 00:07:30.935 }, 00:07:30.935 { 00:07:30.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.935 "dma_device_type": 2 00:07:30.935 } 00:07:30.935 ], 00:07:30.935 "driver_specific": {} 00:07:30.935 } 00:07:30.935 ] 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:30.935 09:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.935 09:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.935 "name": "Existed_Raid", 00:07:30.935 "uuid": "048069c7-069d-4044-b1f9-67e717a90483", 00:07:30.935 "strip_size_kb": 64, 00:07:30.935 "state": "online", 00:07:30.935 "raid_level": "concat", 00:07:30.935 "superblock": true, 00:07:30.935 "num_base_bdevs": 2, 00:07:30.935 "num_base_bdevs_discovered": 2, 00:07:30.935 "num_base_bdevs_operational": 2, 00:07:30.935 "base_bdevs_list": [ 00:07:30.935 { 00:07:30.935 "name": "BaseBdev1", 00:07:30.935 "uuid": "e612064e-815d-4161-8304-2572739ff0ec", 00:07:30.935 "is_configured": true, 00:07:30.935 "data_offset": 2048, 00:07:30.935 "data_size": 63488 00:07:30.935 }, 00:07:30.935 { 00:07:30.935 "name": "BaseBdev2", 00:07:30.935 "uuid": "173f325c-e830-4160-9018-841008990293", 00:07:30.935 "is_configured": true, 00:07:30.935 "data_offset": 2048, 00:07:30.935 "data_size": 63488 00:07:30.935 } 00:07:30.935 ] 00:07:30.935 }' 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.935 09:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.207 [2024-12-12 09:21:05.169013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.207 "name": "Existed_Raid", 00:07:31.207 "aliases": [ 00:07:31.207 "048069c7-069d-4044-b1f9-67e717a90483" 00:07:31.207 ], 00:07:31.207 "product_name": "Raid Volume", 00:07:31.207 "block_size": 512, 00:07:31.207 "num_blocks": 126976, 00:07:31.207 "uuid": "048069c7-069d-4044-b1f9-67e717a90483", 00:07:31.207 "assigned_rate_limits": { 00:07:31.207 "rw_ios_per_sec": 0, 00:07:31.207 "rw_mbytes_per_sec": 0, 00:07:31.207 "r_mbytes_per_sec": 0, 00:07:31.207 "w_mbytes_per_sec": 0 00:07:31.207 }, 00:07:31.207 "claimed": false, 00:07:31.207 "zoned": false, 00:07:31.207 "supported_io_types": { 00:07:31.207 "read": true, 00:07:31.207 "write": true, 00:07:31.207 "unmap": true, 00:07:31.207 "flush": true, 00:07:31.207 "reset": true, 00:07:31.207 "nvme_admin": false, 00:07:31.207 "nvme_io": false, 00:07:31.207 "nvme_io_md": false, 00:07:31.207 "write_zeroes": true, 00:07:31.207 "zcopy": false, 00:07:31.207 "get_zone_info": false, 00:07:31.207 "zone_management": false, 00:07:31.207 "zone_append": false, 00:07:31.207 "compare": false, 00:07:31.207 "compare_and_write": false, 00:07:31.207 "abort": false, 00:07:31.207 "seek_hole": false, 00:07:31.207 "seek_data": false, 00:07:31.207 "copy": false, 00:07:31.207 "nvme_iov_md": false 00:07:31.207 }, 00:07:31.207 "memory_domains": [ 00:07:31.207 { 00:07:31.207 
"dma_device_id": "system", 00:07:31.207 "dma_device_type": 1 00:07:31.207 }, 00:07:31.207 { 00:07:31.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.207 "dma_device_type": 2 00:07:31.207 }, 00:07:31.207 { 00:07:31.207 "dma_device_id": "system", 00:07:31.207 "dma_device_type": 1 00:07:31.207 }, 00:07:31.207 { 00:07:31.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.207 "dma_device_type": 2 00:07:31.207 } 00:07:31.207 ], 00:07:31.207 "driver_specific": { 00:07:31.207 "raid": { 00:07:31.207 "uuid": "048069c7-069d-4044-b1f9-67e717a90483", 00:07:31.207 "strip_size_kb": 64, 00:07:31.207 "state": "online", 00:07:31.207 "raid_level": "concat", 00:07:31.207 "superblock": true, 00:07:31.207 "num_base_bdevs": 2, 00:07:31.207 "num_base_bdevs_discovered": 2, 00:07:31.207 "num_base_bdevs_operational": 2, 00:07:31.207 "base_bdevs_list": [ 00:07:31.207 { 00:07:31.207 "name": "BaseBdev1", 00:07:31.207 "uuid": "e612064e-815d-4161-8304-2572739ff0ec", 00:07:31.207 "is_configured": true, 00:07:31.207 "data_offset": 2048, 00:07:31.207 "data_size": 63488 00:07:31.207 }, 00:07:31.207 { 00:07:31.207 "name": "BaseBdev2", 00:07:31.207 "uuid": "173f325c-e830-4160-9018-841008990293", 00:07:31.207 "is_configured": true, 00:07:31.207 "data_offset": 2048, 00:07:31.207 "data_size": 63488 00:07:31.207 } 00:07:31.207 ] 00:07:31.207 } 00:07:31.207 } 00:07:31.207 }' 00:07:31.207 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:31.466 BaseBdev2' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.466 09:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.466 [2024-12-12 09:21:05.380426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:31.466 [2024-12-12 09:21:05.380533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.466 [2024-12-12 09:21:05.380622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.466 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.725 "name": "Existed_Raid", 00:07:31.725 "uuid": "048069c7-069d-4044-b1f9-67e717a90483", 00:07:31.725 "strip_size_kb": 64, 00:07:31.725 "state": "offline", 00:07:31.725 "raid_level": "concat", 00:07:31.725 "superblock": true, 00:07:31.725 "num_base_bdevs": 2, 00:07:31.725 "num_base_bdevs_discovered": 1, 00:07:31.725 "num_base_bdevs_operational": 1, 00:07:31.725 "base_bdevs_list": [ 00:07:31.725 { 00:07:31.725 "name": null, 00:07:31.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.725 "is_configured": false, 00:07:31.725 "data_offset": 0, 00:07:31.725 "data_size": 63488 00:07:31.725 }, 00:07:31.725 { 00:07:31.725 "name": "BaseBdev2", 00:07:31.725 "uuid": "173f325c-e830-4160-9018-841008990293", 00:07:31.725 "is_configured": true, 00:07:31.725 "data_offset": 2048, 00:07:31.725 "data_size": 63488 00:07:31.725 } 00:07:31.725 ] 
00:07:31.725 }' 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.725 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.984 09:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:31.985 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.985 09:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.985 [2024-12-12 09:21:05.959646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.985 [2024-12-12 09:21:05.959813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.244 09:21:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63103 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63103 ']' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63103 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63103 00:07:32.244 killing process with pid 63103 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63103' 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63103 00:07:32.244 [2024-12-12 09:21:06.161195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.244 09:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63103 00:07:32.244 [2024-12-12 09:21:06.179076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.624 09:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:33.624 00:07:33.624 real 0m5.005s 00:07:33.624 user 0m7.017s 00:07:33.624 sys 0m0.893s 00:07:33.624 09:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.624 09:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.624 ************************************ 00:07:33.624 END TEST raid_state_function_test_sb 00:07:33.624 ************************************ 00:07:33.624 09:21:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:33.624 09:21:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:33.624 09:21:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.624 09:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.624 ************************************ 00:07:33.624 START TEST raid_superblock_test 00:07:33.624 ************************************ 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63350 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63350 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63350 ']' 00:07:33.624 
09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.624 09:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.624 [2024-12-12 09:21:07.561379] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:33.624 [2024-12-12 09:21:07.561600] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63350 ] 00:07:33.884 [2024-12-12 09:21:07.730067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.884 [2024-12-12 09:21:07.865449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.143 [2024-12-12 09:21:08.087868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.143 [2024-12-12 09:21:08.088033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.403 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.664 malloc1 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.664 [2024-12-12 09:21:08.433637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:34.664 [2024-12-12 09:21:08.433769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.664 [2024-12-12 09:21:08.433813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:34.664 [2024-12-12 09:21:08.433842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:34.664 [2024-12-12 09:21:08.436314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.664 [2024-12-12 09:21:08.436402] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:34.664 pt1 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.664 malloc2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.664 [2024-12-12 09:21:08.498765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:34.664 [2024-12-12 09:21:08.498829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.664 [2024-12-12 09:21:08.498857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:34.664 [2024-12-12 09:21:08.498866] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.664 [2024-12-12 09:21:08.501331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.664 [2024-12-12 09:21:08.501367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:34.664 pt2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.664 [2024-12-12 09:21:08.510817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:34.664 [2024-12-12 09:21:08.513047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:34.664 [2024-12-12 09:21:08.513224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:34.664 [2024-12-12 09:21:08.513238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:34.664 [2024-12-12 09:21:08.513513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:34.664 [2024-12-12 09:21:08.513685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:34.664 [2024-12-12 09:21:08.513697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:34.664 [2024-12-12 09:21:08.513876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.664 09:21:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.664 "name": "raid_bdev1", 00:07:34.664 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:34.664 "strip_size_kb": 64, 00:07:34.664 "state": "online", 00:07:34.664 "raid_level": "concat", 00:07:34.664 "superblock": true, 00:07:34.664 "num_base_bdevs": 2, 00:07:34.664 "num_base_bdevs_discovered": 2, 00:07:34.664 "num_base_bdevs_operational": 2, 00:07:34.664 "base_bdevs_list": [ 00:07:34.664 { 00:07:34.664 "name": "pt1", 00:07:34.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.664 "is_configured": true, 00:07:34.664 "data_offset": 2048, 00:07:34.664 "data_size": 63488 00:07:34.664 }, 00:07:34.664 { 00:07:34.664 "name": "pt2", 00:07:34.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.664 "is_configured": true, 00:07:34.664 "data_offset": 2048, 00:07:34.664 "data_size": 63488 00:07:34.664 } 00:07:34.664 ] 00:07:34.664 }' 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.664 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:35.185 
09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.185 [2024-12-12 09:21:08.958353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:35.185 "name": "raid_bdev1", 00:07:35.185 "aliases": [ 00:07:35.185 "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300" 00:07:35.185 ], 00:07:35.185 "product_name": "Raid Volume", 00:07:35.185 "block_size": 512, 00:07:35.185 "num_blocks": 126976, 00:07:35.185 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:35.185 "assigned_rate_limits": { 00:07:35.185 "rw_ios_per_sec": 0, 00:07:35.185 "rw_mbytes_per_sec": 0, 00:07:35.185 "r_mbytes_per_sec": 0, 00:07:35.185 "w_mbytes_per_sec": 0 00:07:35.185 }, 00:07:35.185 "claimed": false, 00:07:35.185 "zoned": false, 00:07:35.185 "supported_io_types": { 00:07:35.185 "read": true, 00:07:35.185 "write": true, 00:07:35.185 "unmap": true, 00:07:35.185 "flush": true, 00:07:35.185 "reset": true, 00:07:35.185 "nvme_admin": false, 00:07:35.185 "nvme_io": false, 00:07:35.185 "nvme_io_md": false, 00:07:35.185 "write_zeroes": true, 00:07:35.185 "zcopy": false, 00:07:35.185 "get_zone_info": false, 00:07:35.185 "zone_management": false, 00:07:35.185 "zone_append": false, 00:07:35.185 "compare": false, 00:07:35.185 "compare_and_write": false, 00:07:35.185 "abort": false, 00:07:35.185 "seek_hole": false, 00:07:35.185 
"seek_data": false, 00:07:35.185 "copy": false, 00:07:35.185 "nvme_iov_md": false 00:07:35.185 }, 00:07:35.185 "memory_domains": [ 00:07:35.185 { 00:07:35.185 "dma_device_id": "system", 00:07:35.185 "dma_device_type": 1 00:07:35.185 }, 00:07:35.185 { 00:07:35.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.185 "dma_device_type": 2 00:07:35.185 }, 00:07:35.185 { 00:07:35.185 "dma_device_id": "system", 00:07:35.185 "dma_device_type": 1 00:07:35.185 }, 00:07:35.185 { 00:07:35.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.185 "dma_device_type": 2 00:07:35.185 } 00:07:35.185 ], 00:07:35.185 "driver_specific": { 00:07:35.185 "raid": { 00:07:35.185 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:35.185 "strip_size_kb": 64, 00:07:35.185 "state": "online", 00:07:35.185 "raid_level": "concat", 00:07:35.185 "superblock": true, 00:07:35.185 "num_base_bdevs": 2, 00:07:35.185 "num_base_bdevs_discovered": 2, 00:07:35.185 "num_base_bdevs_operational": 2, 00:07:35.185 "base_bdevs_list": [ 00:07:35.185 { 00:07:35.185 "name": "pt1", 00:07:35.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.185 "is_configured": true, 00:07:35.185 "data_offset": 2048, 00:07:35.185 "data_size": 63488 00:07:35.185 }, 00:07:35.185 { 00:07:35.185 "name": "pt2", 00:07:35.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.185 "is_configured": true, 00:07:35.185 "data_offset": 2048, 00:07:35.185 "data_size": 63488 00:07:35.185 } 00:07:35.185 ] 00:07:35.185 } 00:07:35.185 } 00:07:35.185 }' 00:07:35.185 09:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:35.185 pt2' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.185 09:21:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:35.185 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.186 [2024-12-12 09:21:09.146013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300 ']' 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.186 [2024-12-12 09:21:09.189592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.186 [2024-12-12 09:21:09.189676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.186 [2024-12-12 09:21:09.189811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.186 [2024-12-12 09:21:09.189873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.186 [2024-12-12 09:21:09.189890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.186 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:35.445 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 [2024-12-12 09:21:09.333351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:35.446 [2024-12-12 09:21:09.335502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:35.446 [2024-12-12 09:21:09.335575] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:35.446 [2024-12-12 09:21:09.335634] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:35.446 [2024-12-12 09:21:09.335650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.446 [2024-12-12 09:21:09.335661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:35.446 request: 00:07:35.446 { 00:07:35.446 "name": "raid_bdev1", 00:07:35.446 "raid_level": "concat", 00:07:35.446 "base_bdevs": [ 00:07:35.446 "malloc1", 00:07:35.446 "malloc2" 00:07:35.446 ], 00:07:35.446 "strip_size_kb": 64, 00:07:35.446 "superblock": false, 00:07:35.446 "method": "bdev_raid_create", 00:07:35.446 "req_id": 1 00:07:35.446 } 00:07:35.446 Got JSON-RPC error response 00:07:35.446 response: 00:07:35.446 { 00:07:35.446 "code": -17, 00:07:35.446 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:35.446 } 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 
09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 [2024-12-12 09:21:09.397198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:35.446 [2024-12-12 09:21:09.397292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.446 [2024-12-12 09:21:09.397326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:35.446 [2024-12-12 09:21:09.397356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.446 [2024-12-12 09:21:09.399823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.446 [2024-12-12 09:21:09.399922] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:35.446 [2024-12-12 09:21:09.400045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:35.446 [2024-12-12 09:21:09.400146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:35.446 pt1 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.446 "name": "raid_bdev1", 00:07:35.446 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:35.446 "strip_size_kb": 64, 00:07:35.446 "state": "configuring", 00:07:35.446 "raid_level": "concat", 00:07:35.446 "superblock": true, 00:07:35.446 "num_base_bdevs": 2, 00:07:35.446 "num_base_bdevs_discovered": 1, 00:07:35.446 "num_base_bdevs_operational": 2, 00:07:35.446 "base_bdevs_list": [ 00:07:35.446 { 00:07:35.446 "name": "pt1", 00:07:35.446 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:35.446 "is_configured": true, 00:07:35.446 "data_offset": 2048, 00:07:35.446 "data_size": 63488 00:07:35.446 }, 00:07:35.446 { 00:07:35.446 "name": null, 00:07:35.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.446 "is_configured": false, 00:07:35.446 "data_offset": 2048, 00:07:35.446 "data_size": 63488 00:07:35.446 } 00:07:35.446 ] 00:07:35.446 }' 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.446 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.015 [2024-12-12 09:21:09.848459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:36.015 [2024-12-12 09:21:09.848538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.015 [2024-12-12 09:21:09.848561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:36.015 [2024-12-12 09:21:09.848573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.015 [2024-12-12 09:21:09.849079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.015 [2024-12-12 09:21:09.849101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:36.015 [2024-12-12 09:21:09.849191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:36.015 [2024-12-12 09:21:09.849220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:36.015 [2024-12-12 09:21:09.849335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:36.015 [2024-12-12 09:21:09.849346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.015 [2024-12-12 09:21:09.849600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:36.015 [2024-12-12 09:21:09.849743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:36.015 [2024-12-12 09:21:09.849751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:36.015 [2024-12-12 09:21:09.849886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.015 pt2 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.015 "name": "raid_bdev1", 00:07:36.015 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:36.015 "strip_size_kb": 64, 00:07:36.015 "state": "online", 00:07:36.015 "raid_level": "concat", 00:07:36.015 "superblock": true, 00:07:36.015 "num_base_bdevs": 2, 00:07:36.015 "num_base_bdevs_discovered": 2, 00:07:36.015 "num_base_bdevs_operational": 2, 00:07:36.015 "base_bdevs_list": [ 00:07:36.015 { 00:07:36.015 "name": "pt1", 00:07:36.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.015 "is_configured": true, 00:07:36.015 "data_offset": 2048, 00:07:36.015 "data_size": 63488 00:07:36.015 }, 00:07:36.015 { 00:07:36.015 "name": "pt2", 00:07:36.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.015 "is_configured": true, 00:07:36.015 "data_offset": 2048, 00:07:36.015 "data_size": 63488 00:07:36.015 } 00:07:36.015 ] 00:07:36.015 }' 00:07:36.015 09:21:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.015 09:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.275 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.275 [2024-12-12 09:21:10.292007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:36.535 "name": "raid_bdev1", 00:07:36.535 "aliases": [ 00:07:36.535 "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300" 00:07:36.535 ], 00:07:36.535 "product_name": "Raid Volume", 00:07:36.535 "block_size": 512, 00:07:36.535 "num_blocks": 126976, 00:07:36.535 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:36.535 "assigned_rate_limits": { 00:07:36.535 "rw_ios_per_sec": 0, 00:07:36.535 "rw_mbytes_per_sec": 0, 00:07:36.535 
"r_mbytes_per_sec": 0, 00:07:36.535 "w_mbytes_per_sec": 0 00:07:36.535 }, 00:07:36.535 "claimed": false, 00:07:36.535 "zoned": false, 00:07:36.535 "supported_io_types": { 00:07:36.535 "read": true, 00:07:36.535 "write": true, 00:07:36.535 "unmap": true, 00:07:36.535 "flush": true, 00:07:36.535 "reset": true, 00:07:36.535 "nvme_admin": false, 00:07:36.535 "nvme_io": false, 00:07:36.535 "nvme_io_md": false, 00:07:36.535 "write_zeroes": true, 00:07:36.535 "zcopy": false, 00:07:36.535 "get_zone_info": false, 00:07:36.535 "zone_management": false, 00:07:36.535 "zone_append": false, 00:07:36.535 "compare": false, 00:07:36.535 "compare_and_write": false, 00:07:36.535 "abort": false, 00:07:36.535 "seek_hole": false, 00:07:36.535 "seek_data": false, 00:07:36.535 "copy": false, 00:07:36.535 "nvme_iov_md": false 00:07:36.535 }, 00:07:36.535 "memory_domains": [ 00:07:36.535 { 00:07:36.535 "dma_device_id": "system", 00:07:36.535 "dma_device_type": 1 00:07:36.535 }, 00:07:36.535 { 00:07:36.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.535 "dma_device_type": 2 00:07:36.535 }, 00:07:36.535 { 00:07:36.535 "dma_device_id": "system", 00:07:36.535 "dma_device_type": 1 00:07:36.535 }, 00:07:36.535 { 00:07:36.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.535 "dma_device_type": 2 00:07:36.535 } 00:07:36.535 ], 00:07:36.535 "driver_specific": { 00:07:36.535 "raid": { 00:07:36.535 "uuid": "2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300", 00:07:36.535 "strip_size_kb": 64, 00:07:36.535 "state": "online", 00:07:36.535 "raid_level": "concat", 00:07:36.535 "superblock": true, 00:07:36.535 "num_base_bdevs": 2, 00:07:36.535 "num_base_bdevs_discovered": 2, 00:07:36.535 "num_base_bdevs_operational": 2, 00:07:36.535 "base_bdevs_list": [ 00:07:36.535 { 00:07:36.535 "name": "pt1", 00:07:36.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.535 "is_configured": true, 00:07:36.535 "data_offset": 2048, 00:07:36.535 "data_size": 63488 00:07:36.535 }, 00:07:36.535 { 00:07:36.535 "name": 
"pt2", 00:07:36.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.535 "is_configured": true, 00:07:36.535 "data_offset": 2048, 00:07:36.535 "data_size": 63488 00:07:36.535 } 00:07:36.535 ] 00:07:36.535 } 00:07:36.535 } 00:07:36.535 }' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:36.535 pt2' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:36.536 [2024-12-12 09:21:10.515552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.536 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300 '!=' 2a5c8b57-6ab5-4532-bcb2-c55c4b0fb300 ']' 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63350 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63350 ']' 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63350 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63350 00:07:36.796 killing process with pid 63350 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63350' 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63350 00:07:36.796 09:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63350 00:07:36.796 [2024-12-12 09:21:10.598260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:36.796 [2024-12-12 09:21:10.598388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.796 [2024-12-12 09:21:10.598509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:36.796 [2024-12-12 09:21:10.598526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:36.796 [2024-12-12 09:21:10.816690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.176 09:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:38.176 00:07:38.176 real 0m4.562s 00:07:38.176 user 0m6.244s 00:07:38.176 sys 0m0.835s 00:07:38.176 09:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.176 09:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:38.176 ************************************ 00:07:38.176 END TEST raid_superblock_test 00:07:38.176 ************************************ 00:07:38.176 09:21:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:38.176 09:21:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.176 09:21:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.176 09:21:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.176 ************************************ 00:07:38.176 START TEST raid_read_error_test 00:07:38.176 ************************************ 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GLPVdjEp8h 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63562 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63562 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63562 ']' 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.176 09:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.435 [2024-12-12 09:21:12.200819] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:38.436 [2024-12-12 09:21:12.201037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63562 ] 00:07:38.436 [2024-12-12 09:21:12.373980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.695 [2024-12-12 09:21:12.513029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.960 [2024-12-12 09:21:12.745685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.960 [2024-12-12 09:21:12.745804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 BaseBdev1_malloc 
00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 true 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 [2024-12-12 09:21:13.081727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.222 [2024-12-12 09:21:13.081791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.222 [2024-12-12 09:21:13.081814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.222 [2024-12-12 09:21:13.081826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.222 [2024-12-12 09:21:13.084263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.222 [2024-12-12 09:21:13.084355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.222 BaseBdev1 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 BaseBdev2_malloc 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 true 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 [2024-12-12 09:21:13.154968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.222 [2024-12-12 09:21:13.155029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.222 [2024-12-12 09:21:13.155048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.222 [2024-12-12 09:21:13.155059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.222 [2024-12-12 09:21:13.157453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.222 [2024-12-12 09:21:13.157491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.222 BaseBdev2 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 [2024-12-12 09:21:13.167007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.222 [2024-12-12 09:21:13.169085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.222 [2024-12-12 09:21:13.169271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.222 [2024-12-12 09:21:13.169287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.222 [2024-12-12 09:21:13.169526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:39.222 [2024-12-12 09:21:13.169704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.222 [2024-12-12 09:21:13.169716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.222 [2024-12-12 09:21:13.169854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.222 "name": "raid_bdev1", 00:07:39.222 "uuid": "24dd6959-3a7a-4d4f-9952-0599669cdc06", 00:07:39.222 "strip_size_kb": 64, 00:07:39.222 "state": "online", 00:07:39.222 "raid_level": "concat", 00:07:39.222 "superblock": true, 00:07:39.222 "num_base_bdevs": 2, 00:07:39.222 "num_base_bdevs_discovered": 2, 00:07:39.222 "num_base_bdevs_operational": 2, 00:07:39.222 "base_bdevs_list": [ 00:07:39.222 { 00:07:39.222 "name": "BaseBdev1", 00:07:39.222 "uuid": "6f90b10c-7adf-5972-90d2-19557863443c", 00:07:39.222 "is_configured": true, 00:07:39.222 "data_offset": 2048, 00:07:39.222 "data_size": 63488 00:07:39.222 }, 00:07:39.222 { 00:07:39.222 "name": "BaseBdev2", 00:07:39.222 
"uuid": "e52e827b-3d32-5d4b-8bf9-a0a1055983ef", 00:07:39.222 "is_configured": true, 00:07:39.222 "data_offset": 2048, 00:07:39.222 "data_size": 63488 00:07:39.222 } 00:07:39.222 ] 00:07:39.222 }' 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.222 09:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.790 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:39.791 09:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:39.791 [2024-12-12 09:21:13.695547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.729 "name": "raid_bdev1", 00:07:40.729 "uuid": "24dd6959-3a7a-4d4f-9952-0599669cdc06", 00:07:40.729 "strip_size_kb": 64, 00:07:40.729 "state": "online", 00:07:40.729 "raid_level": "concat", 00:07:40.729 "superblock": true, 00:07:40.729 "num_base_bdevs": 2, 00:07:40.729 "num_base_bdevs_discovered": 2, 00:07:40.729 "num_base_bdevs_operational": 2, 00:07:40.729 "base_bdevs_list": [ 00:07:40.729 { 00:07:40.729 "name": "BaseBdev1", 00:07:40.729 "uuid": "6f90b10c-7adf-5972-90d2-19557863443c", 00:07:40.729 "is_configured": true, 00:07:40.729 "data_offset": 2048, 00:07:40.729 "data_size": 63488 00:07:40.729 }, 00:07:40.729 { 00:07:40.729 "name": "BaseBdev2", 00:07:40.729 "uuid": 
"e52e827b-3d32-5d4b-8bf9-a0a1055983ef", 00:07:40.729 "is_configured": true, 00:07:40.729 "data_offset": 2048, 00:07:40.729 "data_size": 63488 00:07:40.729 } 00:07:40.729 ] 00:07:40.729 }' 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.729 09:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.296 [2024-12-12 09:21:15.052328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.296 [2024-12-12 09:21:15.052478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.296 [2024-12-12 09:21:15.055183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.296 [2024-12-12 09:21:15.055273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.296 [2024-12-12 09:21:15.055328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.296 [2024-12-12 09:21:15.055391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:41.296 { 00:07:41.296 "results": [ 00:07:41.296 { 00:07:41.296 "job": "raid_bdev1", 00:07:41.296 "core_mask": "0x1", 00:07:41.296 "workload": "randrw", 00:07:41.296 "percentage": 50, 00:07:41.296 "status": "finished", 00:07:41.296 "queue_depth": 1, 00:07:41.296 "io_size": 131072, 00:07:41.296 "runtime": 1.357428, 00:07:41.296 "iops": 14290.997386233377, 00:07:41.296 "mibps": 1786.3746732791722, 00:07:41.296 "io_failed": 1, 00:07:41.296 "io_timeout": 0, 00:07:41.296 "avg_latency_us": 
98.11208931706658, 00:07:41.296 "min_latency_us": 25.6, 00:07:41.296 "max_latency_us": 1373.6803493449781 00:07:41.296 } 00:07:41.296 ], 00:07:41.296 "core_count": 1 00:07:41.296 } 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63562 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63562 ']' 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63562 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63562 00:07:41.296 killing process with pid 63562 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63562' 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63562 00:07:41.296 [2024-12-12 09:21:15.098826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.296 09:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63562 00:07:41.296 [2024-12-12 09:21:15.245920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GLPVdjEp8h 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:42.675 09:21:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:42.675 00:07:42.675 real 0m4.436s 00:07:42.675 user 0m5.165s 00:07:42.675 sys 0m0.636s 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.675 ************************************ 00:07:42.675 END TEST raid_read_error_test 00:07:42.675 ************************************ 00:07:42.675 09:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.675 09:21:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:42.675 09:21:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.675 09:21:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.675 09:21:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.675 ************************************ 00:07:42.675 START TEST raid_write_error_test 00:07:42.675 ************************************ 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.675 09:21:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iPyWqW4dsU 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63706 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63706 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63706 ']' 00:07:42.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.675 09:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 [2024-12-12 09:21:16.706294] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:07:42.935 [2024-12-12 09:21:16.706430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63706 ] 00:07:42.935 [2024-12-12 09:21:16.881454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.194 [2024-12-12 09:21:17.022216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.453 [2024-12-12 09:21:17.250403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.453 [2024-12-12 09:21:17.250574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 BaseBdev1_malloc 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 true 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 [2024-12-12 09:21:17.603786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.712 [2024-12-12 09:21:17.603854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.712 [2024-12-12 09:21:17.603876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:43.712 [2024-12-12 09:21:17.603887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.712 [2024-12-12 09:21:17.606240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.712 [2024-12-12 09:21:17.606290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.712 BaseBdev1 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 BaseBdev2_malloc 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.712 09:21:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 true 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 [2024-12-12 09:21:17.674068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.712 [2024-12-12 09:21:17.674127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.712 [2024-12-12 09:21:17.674143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:43.712 [2024-12-12 09:21:17.674154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.712 [2024-12-12 09:21:17.676487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.712 [2024-12-12 09:21:17.676526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.712 BaseBdev2 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 [2024-12-12 09:21:17.686127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:43.712 [2024-12-12 09:21:17.688279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.712 [2024-12-12 09:21:17.688473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:43.712 [2024-12-12 09:21:17.688489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.712 [2024-12-12 09:21:17.688733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:43.712 [2024-12-12 09:21:17.688954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:43.712 [2024-12-12 09:21:17.688968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:43.712 [2024-12-12 09:21:17.689226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.712 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.713 09:21:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.713 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.971 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.971 "name": "raid_bdev1", 00:07:43.971 "uuid": "65b47c8b-75af-45c4-82a1-b84f8bbb28e6", 00:07:43.971 "strip_size_kb": 64, 00:07:43.971 "state": "online", 00:07:43.971 "raid_level": "concat", 00:07:43.971 "superblock": true, 00:07:43.971 "num_base_bdevs": 2, 00:07:43.971 "num_base_bdevs_discovered": 2, 00:07:43.971 "num_base_bdevs_operational": 2, 00:07:43.971 "base_bdevs_list": [ 00:07:43.971 { 00:07:43.971 "name": "BaseBdev1", 00:07:43.971 "uuid": "2adea606-168f-5de6-bbcf-11720bcd3a7d", 00:07:43.971 "is_configured": true, 00:07:43.971 "data_offset": 2048, 00:07:43.971 "data_size": 63488 00:07:43.971 }, 00:07:43.971 { 00:07:43.971 "name": "BaseBdev2", 00:07:43.971 "uuid": "02668e84-2da4-57c6-9f90-0e7a3a4987ef", 00:07:43.971 "is_configured": true, 00:07:43.971 "data_offset": 2048, 00:07:43.971 "data_size": 63488 00:07:43.971 } 00:07:43.971 ] 00:07:43.971 }' 00:07:43.971 09:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.971 09:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.231 09:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:44.231 09:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.231 [2024-12-12 09:21:18.226761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.169 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.428 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.428 "name": "raid_bdev1", 00:07:45.428 "uuid": "65b47c8b-75af-45c4-82a1-b84f8bbb28e6", 00:07:45.428 "strip_size_kb": 64, 00:07:45.428 "state": "online", 00:07:45.428 "raid_level": "concat", 00:07:45.428 "superblock": true, 00:07:45.428 "num_base_bdevs": 2, 00:07:45.428 "num_base_bdevs_discovered": 2, 00:07:45.428 "num_base_bdevs_operational": 2, 00:07:45.428 "base_bdevs_list": [ 00:07:45.428 { 00:07:45.428 "name": "BaseBdev1", 00:07:45.428 "uuid": "2adea606-168f-5de6-bbcf-11720bcd3a7d", 00:07:45.428 "is_configured": true, 00:07:45.428 "data_offset": 2048, 00:07:45.428 "data_size": 63488 00:07:45.428 }, 00:07:45.428 { 00:07:45.428 "name": "BaseBdev2", 00:07:45.428 "uuid": "02668e84-2da4-57c6-9f90-0e7a3a4987ef", 00:07:45.428 "is_configured": true, 00:07:45.428 "data_offset": 2048, 00:07:45.428 "data_size": 63488 00:07:45.428 } 00:07:45.428 ] 00:07:45.428 }' 00:07:45.428 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.428 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.688 [2024-12-12 09:21:19.579190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.688 [2024-12-12 09:21:19.579341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.688 [2024-12-12 09:21:19.582006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.688 [2024-12-12 09:21:19.582104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.688 [2024-12-12 09:21:19.582158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.688 [2024-12-12 09:21:19.582220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:45.688 { 00:07:45.688 "results": [ 00:07:45.688 { 00:07:45.688 "job": "raid_bdev1", 00:07:45.688 "core_mask": "0x1", 00:07:45.688 "workload": "randrw", 00:07:45.688 "percentage": 50, 00:07:45.688 "status": "finished", 00:07:45.688 "queue_depth": 1, 00:07:45.688 "io_size": 131072, 00:07:45.688 "runtime": 1.353119, 00:07:45.688 "iops": 14242.649759555516, 00:07:45.688 "mibps": 1780.3312199444395, 00:07:45.688 "io_failed": 1, 00:07:45.688 "io_timeout": 0, 00:07:45.688 "avg_latency_us": 98.35202248003122, 00:07:45.688 "min_latency_us": 25.7117903930131, 00:07:45.688 "max_latency_us": 1409.4532751091704 00:07:45.688 } 00:07:45.688 ], 00:07:45.688 "core_count": 1 00:07:45.688 } 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63706 00:07:45.688 09:21:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63706 ']' 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63706 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63706 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.688 killing process with pid 63706 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63706' 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63706 00:07:45.688 [2024-12-12 09:21:19.626339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.688 09:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63706 00:07:45.946 [2024-12-12 09:21:19.770604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iPyWqW4dsU 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.321 09:21:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:47.321 00:07:47.321 real 0m4.450s 00:07:47.321 user 0m5.238s 00:07:47.321 sys 0m0.598s 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.321 09:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 ************************************ 00:07:47.321 END TEST raid_write_error_test 00:07:47.321 ************************************ 00:07:47.321 09:21:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:47.321 09:21:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:47.321 09:21:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.321 09:21:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.321 09:21:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 ************************************ 00:07:47.321 START TEST raid_state_function_test 00:07:47.321 ************************************ 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63851 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63851' 00:07:47.321 Process raid pid: 63851 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63851 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63851 ']' 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.321 09:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 [2024-12-12 09:21:21.212554] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:07:47.321 [2024-12-12 09:21:21.212663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.580 [2024-12-12 09:21:21.366953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.580 [2024-12-12 09:21:21.507995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.838 [2024-12-12 09:21:21.749910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.838 [2024-12-12 09:21:21.749973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.096 [2024-12-12 09:21:22.057437] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.096 [2024-12-12 09:21:22.057511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.096 [2024-12-12 09:21:22.057521] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.096 [2024-12-12 09:21:22.057532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.096 09:21:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:48.096 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.097 "name": "Existed_Raid", 00:07:48.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.097 "strip_size_kb": 0, 00:07:48.097 "state": "configuring", 00:07:48.097 
"raid_level": "raid1", 00:07:48.097 "superblock": false, 00:07:48.097 "num_base_bdevs": 2, 00:07:48.097 "num_base_bdevs_discovered": 0, 00:07:48.097 "num_base_bdevs_operational": 2, 00:07:48.097 "base_bdevs_list": [ 00:07:48.097 { 00:07:48.097 "name": "BaseBdev1", 00:07:48.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.097 "is_configured": false, 00:07:48.097 "data_offset": 0, 00:07:48.097 "data_size": 0 00:07:48.097 }, 00:07:48.097 { 00:07:48.097 "name": "BaseBdev2", 00:07:48.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.097 "is_configured": false, 00:07:48.097 "data_offset": 0, 00:07:48.097 "data_size": 0 00:07:48.097 } 00:07:48.097 ] 00:07:48.097 }' 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.097 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.663 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.663 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.663 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.663 [2024-12-12 09:21:22.484628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:48.663 [2024-12-12 09:21:22.484767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:48.663 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:48.664 [2024-12-12 09:21:22.492597] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.664 [2024-12-12 09:21:22.492645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.664 [2024-12-12 09:21:22.492655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.664 [2024-12-12 09:21:22.492668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.664 [2024-12-12 09:21:22.540795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.664 BaseBdev1 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.664 [ 00:07:48.664 { 00:07:48.664 "name": "BaseBdev1", 00:07:48.664 "aliases": [ 00:07:48.664 "c210d3ba-d530-4ea0-8f86-83ce921a17ca" 00:07:48.664 ], 00:07:48.664 "product_name": "Malloc disk", 00:07:48.664 "block_size": 512, 00:07:48.664 "num_blocks": 65536, 00:07:48.664 "uuid": "c210d3ba-d530-4ea0-8f86-83ce921a17ca", 00:07:48.664 "assigned_rate_limits": { 00:07:48.664 "rw_ios_per_sec": 0, 00:07:48.664 "rw_mbytes_per_sec": 0, 00:07:48.664 "r_mbytes_per_sec": 0, 00:07:48.664 "w_mbytes_per_sec": 0 00:07:48.664 }, 00:07:48.664 "claimed": true, 00:07:48.664 "claim_type": "exclusive_write", 00:07:48.664 "zoned": false, 00:07:48.664 "supported_io_types": { 00:07:48.664 "read": true, 00:07:48.664 "write": true, 00:07:48.664 "unmap": true, 00:07:48.664 "flush": true, 00:07:48.664 "reset": true, 00:07:48.664 "nvme_admin": false, 00:07:48.664 "nvme_io": false, 00:07:48.664 "nvme_io_md": false, 00:07:48.664 "write_zeroes": true, 00:07:48.664 "zcopy": true, 00:07:48.664 "get_zone_info": false, 00:07:48.664 "zone_management": false, 00:07:48.664 "zone_append": false, 00:07:48.664 "compare": false, 00:07:48.664 "compare_and_write": false, 00:07:48.664 "abort": true, 00:07:48.664 "seek_hole": false, 00:07:48.664 "seek_data": false, 00:07:48.664 "copy": true, 00:07:48.664 "nvme_iov_md": 
false 00:07:48.664 }, 00:07:48.664 "memory_domains": [ 00:07:48.664 { 00:07:48.664 "dma_device_id": "system", 00:07:48.664 "dma_device_type": 1 00:07:48.664 }, 00:07:48.664 { 00:07:48.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.664 "dma_device_type": 2 00:07:48.664 } 00:07:48.664 ], 00:07:48.664 "driver_specific": {} 00:07:48.664 } 00:07:48.664 ] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.664 
09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.664 "name": "Existed_Raid", 00:07:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.664 "strip_size_kb": 0, 00:07:48.664 "state": "configuring", 00:07:48.664 "raid_level": "raid1", 00:07:48.664 "superblock": false, 00:07:48.664 "num_base_bdevs": 2, 00:07:48.664 "num_base_bdevs_discovered": 1, 00:07:48.664 "num_base_bdevs_operational": 2, 00:07:48.664 "base_bdevs_list": [ 00:07:48.664 { 00:07:48.664 "name": "BaseBdev1", 00:07:48.664 "uuid": "c210d3ba-d530-4ea0-8f86-83ce921a17ca", 00:07:48.664 "is_configured": true, 00:07:48.664 "data_offset": 0, 00:07:48.664 "data_size": 65536 00:07:48.664 }, 00:07:48.664 { 00:07:48.664 "name": "BaseBdev2", 00:07:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.664 "is_configured": false, 00:07:48.664 "data_offset": 0, 00:07:48.664 "data_size": 0 00:07:48.664 } 00:07:48.664 ] 00:07:48.664 }' 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.664 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 [2024-12-12 09:21:22.980076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.231 [2024-12-12 09:21:22.980231] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 [2024-12-12 09:21:22.992083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.231 [2024-12-12 09:21:22.994200] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.231 [2024-12-12 09:21:22.994290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.231 09:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.231 "name": "Existed_Raid", 00:07:49.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.231 "strip_size_kb": 0, 00:07:49.231 "state": "configuring", 00:07:49.231 "raid_level": "raid1", 00:07:49.231 "superblock": false, 00:07:49.231 "num_base_bdevs": 2, 00:07:49.231 "num_base_bdevs_discovered": 1, 00:07:49.231 "num_base_bdevs_operational": 2, 00:07:49.231 "base_bdevs_list": [ 00:07:49.231 { 00:07:49.231 "name": "BaseBdev1", 00:07:49.231 "uuid": "c210d3ba-d530-4ea0-8f86-83ce921a17ca", 00:07:49.231 "is_configured": true, 00:07:49.231 "data_offset": 0, 00:07:49.231 "data_size": 65536 00:07:49.231 }, 00:07:49.231 { 00:07:49.231 "name": "BaseBdev2", 00:07:49.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.231 "is_configured": false, 00:07:49.231 "data_offset": 0, 00:07:49.231 "data_size": 0 00:07:49.231 } 00:07:49.231 ] 
00:07:49.231 }' 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.231 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.501 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.501 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.501 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.501 [2024-12-12 09:21:23.508339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.501 [2024-12-12 09:21:23.508487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.501 [2024-12-12 09:21:23.508514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:49.501 [2024-12-12 09:21:23.508845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.501 [2024-12-12 09:21:23.509100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.501 [2024-12-12 09:21:23.509149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:49.502 [2024-12-12 09:21:23.509465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.502 BaseBdev2 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.502 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.780 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.780 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:49.780 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.780 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.780 [ 00:07:49.780 { 00:07:49.780 "name": "BaseBdev2", 00:07:49.780 "aliases": [ 00:07:49.780 "c6f3e17b-da84-4cab-a4d6-39e726fd98f4" 00:07:49.780 ], 00:07:49.780 "product_name": "Malloc disk", 00:07:49.780 "block_size": 512, 00:07:49.780 "num_blocks": 65536, 00:07:49.780 "uuid": "c6f3e17b-da84-4cab-a4d6-39e726fd98f4", 00:07:49.780 "assigned_rate_limits": { 00:07:49.780 "rw_ios_per_sec": 0, 00:07:49.780 "rw_mbytes_per_sec": 0, 00:07:49.780 "r_mbytes_per_sec": 0, 00:07:49.780 "w_mbytes_per_sec": 0 00:07:49.780 }, 00:07:49.780 "claimed": true, 00:07:49.780 "claim_type": "exclusive_write", 00:07:49.780 "zoned": false, 00:07:49.780 "supported_io_types": { 00:07:49.780 "read": true, 00:07:49.780 "write": true, 00:07:49.780 "unmap": true, 00:07:49.780 "flush": true, 00:07:49.780 "reset": true, 00:07:49.780 "nvme_admin": false, 00:07:49.780 "nvme_io": false, 00:07:49.780 "nvme_io_md": false, 00:07:49.780 "write_zeroes": 
true, 00:07:49.780 "zcopy": true, 00:07:49.780 "get_zone_info": false, 00:07:49.780 "zone_management": false, 00:07:49.780 "zone_append": false, 00:07:49.780 "compare": false, 00:07:49.780 "compare_and_write": false, 00:07:49.780 "abort": true, 00:07:49.780 "seek_hole": false, 00:07:49.780 "seek_data": false, 00:07:49.780 "copy": true, 00:07:49.780 "nvme_iov_md": false 00:07:49.780 }, 00:07:49.780 "memory_domains": [ 00:07:49.780 { 00:07:49.780 "dma_device_id": "system", 00:07:49.780 "dma_device_type": 1 00:07:49.780 }, 00:07:49.780 { 00:07:49.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.780 "dma_device_type": 2 00:07:49.780 } 00:07:49.780 ], 00:07:49.781 "driver_specific": {} 00:07:49.781 } 00:07:49.781 ] 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.781 09:21:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.781 "name": "Existed_Raid", 00:07:49.781 "uuid": "79a7753d-3f55-4282-9389-f51ab7302d3a", 00:07:49.781 "strip_size_kb": 0, 00:07:49.781 "state": "online", 00:07:49.781 "raid_level": "raid1", 00:07:49.781 "superblock": false, 00:07:49.781 "num_base_bdevs": 2, 00:07:49.781 "num_base_bdevs_discovered": 2, 00:07:49.781 "num_base_bdevs_operational": 2, 00:07:49.781 "base_bdevs_list": [ 00:07:49.781 { 00:07:49.781 "name": "BaseBdev1", 00:07:49.781 "uuid": "c210d3ba-d530-4ea0-8f86-83ce921a17ca", 00:07:49.781 "is_configured": true, 00:07:49.781 "data_offset": 0, 00:07:49.781 "data_size": 65536 00:07:49.781 }, 00:07:49.781 { 00:07:49.781 "name": "BaseBdev2", 00:07:49.781 "uuid": "c6f3e17b-da84-4cab-a4d6-39e726fd98f4", 00:07:49.781 "is_configured": true, 00:07:49.781 "data_offset": 0, 00:07:49.781 "data_size": 65536 00:07:49.781 } 00:07:49.781 ] 00:07:49.781 }' 00:07:49.781 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.781 09:21:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.040 09:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.040 [2024-12-12 09:21:24.004014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.040 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.040 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.040 "name": "Existed_Raid", 00:07:50.040 "aliases": [ 00:07:50.040 "79a7753d-3f55-4282-9389-f51ab7302d3a" 00:07:50.040 ], 00:07:50.040 "product_name": "Raid Volume", 00:07:50.040 "block_size": 512, 00:07:50.040 "num_blocks": 65536, 00:07:50.040 "uuid": "79a7753d-3f55-4282-9389-f51ab7302d3a", 00:07:50.040 "assigned_rate_limits": { 00:07:50.040 "rw_ios_per_sec": 0, 00:07:50.040 "rw_mbytes_per_sec": 0, 00:07:50.040 "r_mbytes_per_sec": 0, 00:07:50.040 
"w_mbytes_per_sec": 0 00:07:50.040 }, 00:07:50.040 "claimed": false, 00:07:50.040 "zoned": false, 00:07:50.040 "supported_io_types": { 00:07:50.040 "read": true, 00:07:50.040 "write": true, 00:07:50.040 "unmap": false, 00:07:50.040 "flush": false, 00:07:50.040 "reset": true, 00:07:50.040 "nvme_admin": false, 00:07:50.040 "nvme_io": false, 00:07:50.040 "nvme_io_md": false, 00:07:50.040 "write_zeroes": true, 00:07:50.040 "zcopy": false, 00:07:50.040 "get_zone_info": false, 00:07:50.040 "zone_management": false, 00:07:50.040 "zone_append": false, 00:07:50.040 "compare": false, 00:07:50.040 "compare_and_write": false, 00:07:50.040 "abort": false, 00:07:50.040 "seek_hole": false, 00:07:50.040 "seek_data": false, 00:07:50.040 "copy": false, 00:07:50.040 "nvme_iov_md": false 00:07:50.040 }, 00:07:50.040 "memory_domains": [ 00:07:50.040 { 00:07:50.040 "dma_device_id": "system", 00:07:50.040 "dma_device_type": 1 00:07:50.040 }, 00:07:50.040 { 00:07:50.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.040 "dma_device_type": 2 00:07:50.040 }, 00:07:50.040 { 00:07:50.040 "dma_device_id": "system", 00:07:50.040 "dma_device_type": 1 00:07:50.040 }, 00:07:50.040 { 00:07:50.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.040 "dma_device_type": 2 00:07:50.040 } 00:07:50.040 ], 00:07:50.040 "driver_specific": { 00:07:50.040 "raid": { 00:07:50.040 "uuid": "79a7753d-3f55-4282-9389-f51ab7302d3a", 00:07:50.040 "strip_size_kb": 0, 00:07:50.040 "state": "online", 00:07:50.040 "raid_level": "raid1", 00:07:50.040 "superblock": false, 00:07:50.040 "num_base_bdevs": 2, 00:07:50.040 "num_base_bdevs_discovered": 2, 00:07:50.040 "num_base_bdevs_operational": 2, 00:07:50.040 "base_bdevs_list": [ 00:07:50.040 { 00:07:50.040 "name": "BaseBdev1", 00:07:50.040 "uuid": "c210d3ba-d530-4ea0-8f86-83ce921a17ca", 00:07:50.040 "is_configured": true, 00:07:50.040 "data_offset": 0, 00:07:50.040 "data_size": 65536 00:07:50.040 }, 00:07:50.040 { 00:07:50.040 "name": "BaseBdev2", 00:07:50.040 "uuid": 
"c6f3e17b-da84-4cab-a4d6-39e726fd98f4", 00:07:50.040 "is_configured": true, 00:07:50.040 "data_offset": 0, 00:07:50.040 "data_size": 65536 00:07:50.040 } 00:07:50.040 ] 00:07:50.040 } 00:07:50.040 } 00:07:50.040 }' 00:07:50.040 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.299 BaseBdev2' 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.299 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.300 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.300 [2024-12-12 09:21:24.235268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.558 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.558 "name": "Existed_Raid", 00:07:50.558 "uuid": "79a7753d-3f55-4282-9389-f51ab7302d3a", 00:07:50.558 "strip_size_kb": 0, 00:07:50.558 "state": "online", 00:07:50.558 "raid_level": "raid1", 00:07:50.558 "superblock": false, 00:07:50.559 "num_base_bdevs": 2, 00:07:50.559 "num_base_bdevs_discovered": 1, 00:07:50.559 "num_base_bdevs_operational": 1, 00:07:50.559 "base_bdevs_list": [ 00:07:50.559 { 
00:07:50.559 "name": null, 00:07:50.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.559 "is_configured": false, 00:07:50.559 "data_offset": 0, 00:07:50.559 "data_size": 65536 00:07:50.559 }, 00:07:50.559 { 00:07:50.559 "name": "BaseBdev2", 00:07:50.559 "uuid": "c6f3e17b-da84-4cab-a4d6-39e726fd98f4", 00:07:50.559 "is_configured": true, 00:07:50.559 "data_offset": 0, 00:07:50.559 "data_size": 65536 00:07:50.559 } 00:07:50.559 ] 00:07:50.559 }' 00:07:50.559 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.559 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.817 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.817 [2024-12-12 09:21:24.813824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:50.817 [2024-12-12 09:21:24.814067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.076 [2024-12-12 09:21:24.913249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.076 [2024-12-12 09:21:24.913366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.076 [2024-12-12 09:21:24.913409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63851 00:07:51.076 09:21:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63851 ']' 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63851 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.076 09:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63851 00:07:51.076 killing process with pid 63851 00:07:51.076 09:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.076 09:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.076 09:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63851' 00:07:51.076 09:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63851 00:07:51.076 [2024-12-12 09:21:25.012459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.077 09:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63851 00:07:51.077 [2024-12-12 09:21:25.030604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.456 ************************************ 00:07:52.456 END TEST raid_state_function_test 00:07:52.456 ************************************ 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.456 00:07:52.456 real 0m5.124s 00:07:52.456 user 0m7.248s 00:07:52.456 sys 0m0.893s 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.456 09:21:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:52.456 09:21:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.456 09:21:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.456 09:21:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.456 ************************************ 00:07:52.456 START TEST raid_state_function_test_sb 00:07:52.456 ************************************ 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:52.456 Process raid pid: 64103 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64103 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64103' 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64103 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64103 ']' 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.456 09:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.456 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.456 [2024-12-12 09:21:26.410506] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:52.456 [2024-12-12 09:21:26.410694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.714 [2024-12-12 09:21:26.586321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.714 [2024-12-12 09:21:26.722554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.972 [2024-12-12 09:21:26.950231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.973 [2024-12-12 09:21:26.950290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.231 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.231 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:53.231 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.231 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.231 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.489 [2024-12-12 09:21:27.254977] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.489 [2024-12-12 09:21:27.255091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.489 [2024-12-12 09:21:27.255121] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.489 [2024-12-12 09:21:27.255145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.489 "name": "Existed_Raid", 00:07:53.489 "uuid": "1211eb18-403f-42af-884e-732badea617d", 00:07:53.489 "strip_size_kb": 0, 00:07:53.489 "state": "configuring", 00:07:53.489 "raid_level": "raid1", 00:07:53.489 "superblock": true, 00:07:53.489 "num_base_bdevs": 2, 00:07:53.489 "num_base_bdevs_discovered": 0, 00:07:53.489 "num_base_bdevs_operational": 2, 00:07:53.489 "base_bdevs_list": [ 00:07:53.489 { 00:07:53.489 "name": "BaseBdev1", 00:07:53.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.489 "is_configured": false, 00:07:53.489 "data_offset": 0, 00:07:53.489 "data_size": 0 00:07:53.489 }, 00:07:53.489 { 00:07:53.489 "name": "BaseBdev2", 00:07:53.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.489 "is_configured": false, 00:07:53.489 "data_offset": 0, 00:07:53.489 "data_size": 0 00:07:53.489 } 00:07:53.489 ] 00:07:53.489 }' 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.489 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.747 [2024-12-12 09:21:27.698139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:53.747 [2024-12-12 09:21:27.698237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.747 [2024-12-12 09:21:27.710107] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.747 [2024-12-12 09:21:27.710186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.747 [2024-12-12 09:21:27.710213] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.747 [2024-12-12 09:21:27.710238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.747 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.747 [2024-12-12 09:21:27.763625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.747 BaseBdev1 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.748 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.006 [ 00:07:54.006 { 00:07:54.006 "name": "BaseBdev1", 00:07:54.006 "aliases": [ 00:07:54.006 "52e0aa3e-23be-4ffa-8073-438134453bbd" 00:07:54.006 ], 00:07:54.006 "product_name": "Malloc disk", 00:07:54.006 "block_size": 512, 00:07:54.006 "num_blocks": 65536, 00:07:54.006 "uuid": "52e0aa3e-23be-4ffa-8073-438134453bbd", 00:07:54.006 "assigned_rate_limits": { 00:07:54.006 "rw_ios_per_sec": 0, 00:07:54.006 "rw_mbytes_per_sec": 0, 00:07:54.006 "r_mbytes_per_sec": 0, 00:07:54.006 "w_mbytes_per_sec": 0 00:07:54.006 }, 00:07:54.006 "claimed": true, 
00:07:54.006 "claim_type": "exclusive_write", 00:07:54.006 "zoned": false, 00:07:54.006 "supported_io_types": { 00:07:54.006 "read": true, 00:07:54.006 "write": true, 00:07:54.006 "unmap": true, 00:07:54.006 "flush": true, 00:07:54.006 "reset": true, 00:07:54.006 "nvme_admin": false, 00:07:54.006 "nvme_io": false, 00:07:54.006 "nvme_io_md": false, 00:07:54.006 "write_zeroes": true, 00:07:54.006 "zcopy": true, 00:07:54.006 "get_zone_info": false, 00:07:54.006 "zone_management": false, 00:07:54.006 "zone_append": false, 00:07:54.006 "compare": false, 00:07:54.006 "compare_and_write": false, 00:07:54.006 "abort": true, 00:07:54.006 "seek_hole": false, 00:07:54.006 "seek_data": false, 00:07:54.006 "copy": true, 00:07:54.006 "nvme_iov_md": false 00:07:54.006 }, 00:07:54.006 "memory_domains": [ 00:07:54.006 { 00:07:54.006 "dma_device_id": "system", 00:07:54.006 "dma_device_type": 1 00:07:54.006 }, 00:07:54.006 { 00:07:54.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.006 "dma_device_type": 2 00:07:54.006 } 00:07:54.006 ], 00:07:54.006 "driver_specific": {} 00:07:54.006 } 00:07:54.006 ] 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.006 "name": "Existed_Raid", 00:07:54.006 "uuid": "269401c3-b20f-44b3-8cc3-7eb1701a8e45", 00:07:54.006 "strip_size_kb": 0, 00:07:54.006 "state": "configuring", 00:07:54.006 "raid_level": "raid1", 00:07:54.006 "superblock": true, 00:07:54.006 "num_base_bdevs": 2, 00:07:54.006 "num_base_bdevs_discovered": 1, 00:07:54.006 "num_base_bdevs_operational": 2, 00:07:54.006 "base_bdevs_list": [ 00:07:54.006 { 00:07:54.006 "name": "BaseBdev1", 00:07:54.006 "uuid": "52e0aa3e-23be-4ffa-8073-438134453bbd", 00:07:54.006 "is_configured": true, 00:07:54.006 "data_offset": 2048, 00:07:54.006 "data_size": 63488 00:07:54.006 }, 00:07:54.006 { 00:07:54.006 "name": "BaseBdev2", 00:07:54.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.006 "is_configured": false, 00:07:54.006 
"data_offset": 0, 00:07:54.006 "data_size": 0 00:07:54.006 } 00:07:54.006 ] 00:07:54.006 }' 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.006 09:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.264 [2024-12-12 09:21:28.258892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.264 [2024-12-12 09:21:28.259076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.264 [2024-12-12 09:21:28.270936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.264 [2024-12-12 09:21:28.273260] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.264 [2024-12-12 09:21:28.273346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.264 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.522 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.522 09:21:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.522 "name": "Existed_Raid", 00:07:54.522 "uuid": "55466eda-cd7a-498c-b568-a0913de79422", 00:07:54.522 "strip_size_kb": 0, 00:07:54.522 "state": "configuring", 00:07:54.522 "raid_level": "raid1", 00:07:54.522 "superblock": true, 00:07:54.522 "num_base_bdevs": 2, 00:07:54.522 "num_base_bdevs_discovered": 1, 00:07:54.522 "num_base_bdevs_operational": 2, 00:07:54.522 "base_bdevs_list": [ 00:07:54.522 { 00:07:54.522 "name": "BaseBdev1", 00:07:54.522 "uuid": "52e0aa3e-23be-4ffa-8073-438134453bbd", 00:07:54.522 "is_configured": true, 00:07:54.522 "data_offset": 2048, 00:07:54.522 "data_size": 63488 00:07:54.522 }, 00:07:54.522 { 00:07:54.522 "name": "BaseBdev2", 00:07:54.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.522 "is_configured": false, 00:07:54.522 "data_offset": 0, 00:07:54.522 "data_size": 0 00:07:54.522 } 00:07:54.522 ] 00:07:54.522 }' 00:07:54.522 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.522 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.781 [2024-12-12 09:21:28.742175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.781 [2024-12-12 09:21:28.742578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.781 [2024-12-12 09:21:28.742634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.781 [2024-12-12 09:21:28.742946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.781 
BaseBdev2 00:07:54.781 [2024-12-12 09:21:28.743205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.781 [2024-12-12 09:21:28.743222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:54.781 [2024-12-12 09:21:28.743383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.781 [ 00:07:54.781 { 00:07:54.781 "name": "BaseBdev2", 00:07:54.781 "aliases": [ 00:07:54.781 "e1670df4-3ba5-47fb-a541-623f6992db08" 00:07:54.781 ], 00:07:54.781 "product_name": "Malloc disk", 00:07:54.781 "block_size": 512, 00:07:54.781 "num_blocks": 65536, 00:07:54.781 "uuid": "e1670df4-3ba5-47fb-a541-623f6992db08", 00:07:54.781 "assigned_rate_limits": { 00:07:54.781 "rw_ios_per_sec": 0, 00:07:54.781 "rw_mbytes_per_sec": 0, 00:07:54.781 "r_mbytes_per_sec": 0, 00:07:54.781 "w_mbytes_per_sec": 0 00:07:54.781 }, 00:07:54.781 "claimed": true, 00:07:54.781 "claim_type": "exclusive_write", 00:07:54.781 "zoned": false, 00:07:54.781 "supported_io_types": { 00:07:54.781 "read": true, 00:07:54.781 "write": true, 00:07:54.781 "unmap": true, 00:07:54.781 "flush": true, 00:07:54.781 "reset": true, 00:07:54.781 "nvme_admin": false, 00:07:54.781 "nvme_io": false, 00:07:54.781 "nvme_io_md": false, 00:07:54.781 "write_zeroes": true, 00:07:54.781 "zcopy": true, 00:07:54.781 "get_zone_info": false, 00:07:54.781 "zone_management": false, 00:07:54.781 "zone_append": false, 00:07:54.781 "compare": false, 00:07:54.781 "compare_and_write": false, 00:07:54.781 "abort": true, 00:07:54.781 "seek_hole": false, 00:07:54.781 "seek_data": false, 00:07:54.781 "copy": true, 00:07:54.781 "nvme_iov_md": false 00:07:54.781 }, 00:07:54.781 "memory_domains": [ 00:07:54.781 { 00:07:54.781 "dma_device_id": "system", 00:07:54.781 "dma_device_type": 1 00:07:54.781 }, 00:07:54.781 { 00:07:54.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.781 "dma_device_type": 2 00:07:54.781 } 00:07:54.781 ], 00:07:54.781 "driver_specific": {} 00:07:54.781 } 00:07:54.781 ] 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.781 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.039 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:55.039 "name": "Existed_Raid", 00:07:55.039 "uuid": "55466eda-cd7a-498c-b568-a0913de79422", 00:07:55.039 "strip_size_kb": 0, 00:07:55.039 "state": "online", 00:07:55.039 "raid_level": "raid1", 00:07:55.039 "superblock": true, 00:07:55.039 "num_base_bdevs": 2, 00:07:55.039 "num_base_bdevs_discovered": 2, 00:07:55.039 "num_base_bdevs_operational": 2, 00:07:55.039 "base_bdevs_list": [ 00:07:55.039 { 00:07:55.039 "name": "BaseBdev1", 00:07:55.039 "uuid": "52e0aa3e-23be-4ffa-8073-438134453bbd", 00:07:55.039 "is_configured": true, 00:07:55.039 "data_offset": 2048, 00:07:55.039 "data_size": 63488 00:07:55.039 }, 00:07:55.039 { 00:07:55.039 "name": "BaseBdev2", 00:07:55.039 "uuid": "e1670df4-3ba5-47fb-a541-623f6992db08", 00:07:55.039 "is_configured": true, 00:07:55.039 "data_offset": 2048, 00:07:55.039 "data_size": 63488 00:07:55.039 } 00:07:55.039 ] 00:07:55.039 }' 00:07:55.039 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.039 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:55.297 09:21:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.297 [2024-12-12 09:21:29.277606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.297 "name": "Existed_Raid", 00:07:55.297 "aliases": [ 00:07:55.297 "55466eda-cd7a-498c-b568-a0913de79422" 00:07:55.297 ], 00:07:55.297 "product_name": "Raid Volume", 00:07:55.297 "block_size": 512, 00:07:55.297 "num_blocks": 63488, 00:07:55.297 "uuid": "55466eda-cd7a-498c-b568-a0913de79422", 00:07:55.297 "assigned_rate_limits": { 00:07:55.297 "rw_ios_per_sec": 0, 00:07:55.297 "rw_mbytes_per_sec": 0, 00:07:55.297 "r_mbytes_per_sec": 0, 00:07:55.297 "w_mbytes_per_sec": 0 00:07:55.297 }, 00:07:55.297 "claimed": false, 00:07:55.297 "zoned": false, 00:07:55.297 "supported_io_types": { 00:07:55.297 "read": true, 00:07:55.297 "write": true, 00:07:55.297 "unmap": false, 00:07:55.297 "flush": false, 00:07:55.297 "reset": true, 00:07:55.297 "nvme_admin": false, 00:07:55.297 "nvme_io": false, 00:07:55.297 "nvme_io_md": false, 00:07:55.297 "write_zeroes": true, 00:07:55.297 "zcopy": false, 00:07:55.297 "get_zone_info": false, 00:07:55.297 "zone_management": false, 00:07:55.297 "zone_append": false, 00:07:55.297 "compare": false, 00:07:55.297 "compare_and_write": false, 00:07:55.297 "abort": false, 00:07:55.297 "seek_hole": false, 00:07:55.297 "seek_data": false, 00:07:55.297 "copy": false, 00:07:55.297 "nvme_iov_md": false 00:07:55.297 }, 00:07:55.297 "memory_domains": [ 00:07:55.297 { 00:07:55.297 "dma_device_id": "system", 00:07:55.297 
"dma_device_type": 1 00:07:55.297 }, 00:07:55.297 { 00:07:55.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.297 "dma_device_type": 2 00:07:55.297 }, 00:07:55.297 { 00:07:55.297 "dma_device_id": "system", 00:07:55.297 "dma_device_type": 1 00:07:55.297 }, 00:07:55.297 { 00:07:55.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.297 "dma_device_type": 2 00:07:55.297 } 00:07:55.297 ], 00:07:55.297 "driver_specific": { 00:07:55.297 "raid": { 00:07:55.297 "uuid": "55466eda-cd7a-498c-b568-a0913de79422", 00:07:55.297 "strip_size_kb": 0, 00:07:55.297 "state": "online", 00:07:55.297 "raid_level": "raid1", 00:07:55.297 "superblock": true, 00:07:55.297 "num_base_bdevs": 2, 00:07:55.297 "num_base_bdevs_discovered": 2, 00:07:55.297 "num_base_bdevs_operational": 2, 00:07:55.297 "base_bdevs_list": [ 00:07:55.297 { 00:07:55.297 "name": "BaseBdev1", 00:07:55.297 "uuid": "52e0aa3e-23be-4ffa-8073-438134453bbd", 00:07:55.297 "is_configured": true, 00:07:55.297 "data_offset": 2048, 00:07:55.297 "data_size": 63488 00:07:55.297 }, 00:07:55.297 { 00:07:55.297 "name": "BaseBdev2", 00:07:55.297 "uuid": "e1670df4-3ba5-47fb-a541-623f6992db08", 00:07:55.297 "is_configured": true, 00:07:55.297 "data_offset": 2048, 00:07:55.297 "data_size": 63488 00:07:55.297 } 00:07:55.297 ] 00:07:55.297 } 00:07:55.297 } 00:07:55.297 }' 00:07:55.297 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:55.556 BaseBdev2' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:55.556 09:21:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.556 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.556 [2024-12-12 09:21:29.529018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.814 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.815 "name": "Existed_Raid", 00:07:55.815 "uuid": "55466eda-cd7a-498c-b568-a0913de79422", 00:07:55.815 "strip_size_kb": 0, 00:07:55.815 "state": "online", 00:07:55.815 "raid_level": "raid1", 00:07:55.815 "superblock": true, 00:07:55.815 "num_base_bdevs": 2, 00:07:55.815 "num_base_bdevs_discovered": 1, 00:07:55.815 "num_base_bdevs_operational": 1, 00:07:55.815 "base_bdevs_list": [ 00:07:55.815 { 00:07:55.815 "name": null, 00:07:55.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.815 "is_configured": false, 00:07:55.815 "data_offset": 0, 00:07:55.815 "data_size": 63488 00:07:55.815 }, 00:07:55.815 { 00:07:55.815 "name": "BaseBdev2", 00:07:55.815 "uuid": "e1670df4-3ba5-47fb-a541-623f6992db08", 00:07:55.815 "is_configured": true, 00:07:55.815 "data_offset": 2048, 00:07:55.815 "data_size": 63488 00:07:55.815 } 00:07:55.815 ] 00:07:55.815 }' 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.815 09:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:56.072 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.328 [2024-12-12 09:21:30.114541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:56.328 [2024-12-12 09:21:30.114707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.328 [2024-12-12 09:21:30.218077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.328 [2024-12-12 09:21:30.218190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.328 [2024-12-12 09:21:30.218237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64103 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64103 ']' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64103 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64103 00:07:56.328 killing process with pid 64103 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64103' 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64103 00:07:56.328 [2024-12-12 09:21:30.316521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.328 09:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64103 00:07:56.328 [2024-12-12 09:21:30.333082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.698 09:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:57.698 00:07:57.698 real 0m5.204s 00:07:57.698 user 0m7.393s 00:07:57.698 sys 0m0.893s 00:07:57.698 09:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.698 09:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.698 ************************************ 00:07:57.698 END TEST raid_state_function_test_sb 00:07:57.698 ************************************ 00:07:57.698 09:21:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:57.698 09:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:57.698 09:21:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.698 09:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.698 ************************************ 00:07:57.698 START TEST raid_superblock_test 00:07:57.698 ************************************ 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64351 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64351 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64351 ']' 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.698 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.698 [2024-12-12 09:21:31.678826] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:57.698 [2024-12-12 09:21:31.678937] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64351 ] 00:07:57.956 [2024-12-12 09:21:31.854513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.230 [2024-12-12 09:21:31.994527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.230 [2024-12-12 09:21:32.228510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.230 [2024-12-12 09:21:32.228553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.505 09:21:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.505 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.764 malloc1 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.764 [2024-12-12 09:21:32.548596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.764 [2024-12-12 09:21:32.548760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.764 [2024-12-12 09:21:32.548802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:58.764 [2024-12-12 09:21:32.548831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.764 
[2024-12-12 09:21:32.551290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.764 [2024-12-12 09:21:32.551373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.764 pt1 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.764 malloc2 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.764 09:21:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.764 [2024-12-12 09:21:32.608933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.764 [2024-12-12 09:21:32.609060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.764 [2024-12-12 09:21:32.609105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:58.764 [2024-12-12 09:21:32.609144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.764 [2024-12-12 09:21:32.611508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.764 [2024-12-12 09:21:32.611577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.764 pt2 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.764 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.764 [2024-12-12 09:21:32.620976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.764 [2024-12-12 09:21:32.623049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.764 [2024-12-12 09:21:32.623250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:58.764 [2024-12-12 09:21:32.623301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.764 [2024-12-12 
09:21:32.623578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.764 [2024-12-12 09:21:32.623783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:58.764 [2024-12-12 09:21:32.623832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:58.764 [2024-12-12 09:21:32.624023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.765 "name": "raid_bdev1", 00:07:58.765 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:07:58.765 "strip_size_kb": 0, 00:07:58.765 "state": "online", 00:07:58.765 "raid_level": "raid1", 00:07:58.765 "superblock": true, 00:07:58.765 "num_base_bdevs": 2, 00:07:58.765 "num_base_bdevs_discovered": 2, 00:07:58.765 "num_base_bdevs_operational": 2, 00:07:58.765 "base_bdevs_list": [ 00:07:58.765 { 00:07:58.765 "name": "pt1", 00:07:58.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.765 "is_configured": true, 00:07:58.765 "data_offset": 2048, 00:07:58.765 "data_size": 63488 00:07:58.765 }, 00:07:58.765 { 00:07:58.765 "name": "pt2", 00:07:58.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.765 "is_configured": true, 00:07:58.765 "data_offset": 2048, 00:07:58.765 "data_size": 63488 00:07:58.765 } 00:07:58.765 ] 00:07:58.765 }' 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.765 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.333 09:21:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 [2024-12-12 09:21:33.068429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.333 "name": "raid_bdev1", 00:07:59.333 "aliases": [ 00:07:59.333 "b04fd84a-0ff4-4cb5-9927-99c0c873918c" 00:07:59.333 ], 00:07:59.333 "product_name": "Raid Volume", 00:07:59.333 "block_size": 512, 00:07:59.333 "num_blocks": 63488, 00:07:59.333 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:07:59.333 "assigned_rate_limits": { 00:07:59.333 "rw_ios_per_sec": 0, 00:07:59.333 "rw_mbytes_per_sec": 0, 00:07:59.333 "r_mbytes_per_sec": 0, 00:07:59.333 "w_mbytes_per_sec": 0 00:07:59.333 }, 00:07:59.333 "claimed": false, 00:07:59.333 "zoned": false, 00:07:59.333 "supported_io_types": { 00:07:59.333 "read": true, 00:07:59.333 "write": true, 00:07:59.333 "unmap": false, 00:07:59.333 "flush": false, 00:07:59.333 "reset": true, 00:07:59.333 "nvme_admin": false, 00:07:59.333 "nvme_io": false, 00:07:59.333 "nvme_io_md": false, 00:07:59.333 "write_zeroes": true, 00:07:59.333 "zcopy": false, 00:07:59.333 "get_zone_info": false, 00:07:59.333 "zone_management": false, 00:07:59.333 "zone_append": false, 00:07:59.333 "compare": false, 00:07:59.333 "compare_and_write": false, 00:07:59.333 "abort": false, 00:07:59.333 "seek_hole": false, 00:07:59.333 
"seek_data": false, 00:07:59.333 "copy": false, 00:07:59.333 "nvme_iov_md": false 00:07:59.333 }, 00:07:59.333 "memory_domains": [ 00:07:59.333 { 00:07:59.333 "dma_device_id": "system", 00:07:59.333 "dma_device_type": 1 00:07:59.333 }, 00:07:59.333 { 00:07:59.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.333 "dma_device_type": 2 00:07:59.333 }, 00:07:59.333 { 00:07:59.333 "dma_device_id": "system", 00:07:59.333 "dma_device_type": 1 00:07:59.333 }, 00:07:59.333 { 00:07:59.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.333 "dma_device_type": 2 00:07:59.333 } 00:07:59.333 ], 00:07:59.333 "driver_specific": { 00:07:59.333 "raid": { 00:07:59.333 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:07:59.333 "strip_size_kb": 0, 00:07:59.333 "state": "online", 00:07:59.333 "raid_level": "raid1", 00:07:59.333 "superblock": true, 00:07:59.333 "num_base_bdevs": 2, 00:07:59.333 "num_base_bdevs_discovered": 2, 00:07:59.333 "num_base_bdevs_operational": 2, 00:07:59.333 "base_bdevs_list": [ 00:07:59.333 { 00:07:59.333 "name": "pt1", 00:07:59.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.333 "is_configured": true, 00:07:59.333 "data_offset": 2048, 00:07:59.333 "data_size": 63488 00:07:59.333 }, 00:07:59.333 { 00:07:59.333 "name": "pt2", 00:07:59.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.333 "is_configured": true, 00:07:59.333 "data_offset": 2048, 00:07:59.333 "data_size": 63488 00:07:59.333 } 00:07:59.333 ] 00:07:59.333 } 00:07:59.333 } 00:07:59.333 }' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:59.333 pt2' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.333 09:21:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 [2024-12-12 09:21:33.288213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b04fd84a-0ff4-4cb5-9927-99c0c873918c 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b04fd84a-0ff4-4cb5-9927-99c0c873918c ']' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 [2024-12-12 09:21:33.335729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.333 [2024-12-12 09:21:33.335813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.333 [2024-12-12 09:21:33.335952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.333 [2024-12-12 09:21:33.336058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.333 [2024-12-12 09:21:33.336107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.333 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 [2024-12-12 09:21:33.475607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:59.594 [2024-12-12 09:21:33.477852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:59.594 [2024-12-12 09:21:33.478007] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:59.594 [2024-12-12 09:21:33.478122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:59.594 [2024-12-12 09:21:33.478187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.594 [2024-12-12 09:21:33.478219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:59.594 request: 00:07:59.594 { 00:07:59.594 "name": "raid_bdev1", 00:07:59.594 "raid_level": "raid1", 00:07:59.594 "base_bdevs": [ 00:07:59.594 "malloc1", 00:07:59.594 "malloc2" 00:07:59.594 ], 00:07:59.594 "superblock": false, 00:07:59.594 "method": "bdev_raid_create", 00:07:59.594 "req_id": 1 00:07:59.594 } 00:07:59.594 Got JSON-RPC error response 00:07:59.594 response: 00:07:59.594 { 00:07:59.594 "code": -17, 00:07:59.594 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:59.594 } 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 [2024-12-12 09:21:33.539399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.594 [2024-12-12 09:21:33.539492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.594 [2024-12-12 09:21:33.539525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:59.594 [2024-12-12 09:21:33.539554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.594 [2024-12-12 09:21:33.541993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.594 [2024-12-12 09:21:33.542059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.594 [2024-12-12 09:21:33.542174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.595 [2024-12-12 09:21:33.542261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.595 pt1 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.595 09:21:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.595 "name": "raid_bdev1", 00:07:59.595 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:07:59.595 "strip_size_kb": 0, 00:07:59.595 "state": "configuring", 00:07:59.595 "raid_level": "raid1", 00:07:59.595 "superblock": true, 00:07:59.595 "num_base_bdevs": 2, 00:07:59.595 "num_base_bdevs_discovered": 1, 00:07:59.595 "num_base_bdevs_operational": 2, 00:07:59.595 "base_bdevs_list": [ 00:07:59.595 { 00:07:59.595 "name": "pt1", 00:07:59.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.595 
"is_configured": true, 00:07:59.595 "data_offset": 2048, 00:07:59.595 "data_size": 63488 00:07:59.595 }, 00:07:59.595 { 00:07:59.595 "name": null, 00:07:59.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.595 "is_configured": false, 00:07:59.595 "data_offset": 2048, 00:07:59.595 "data_size": 63488 00:07:59.595 } 00:07:59.595 ] 00:07:59.595 }' 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.595 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.163 [2024-12-12 09:21:33.982761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:00.163 [2024-12-12 09:21:33.982932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.163 [2024-12-12 09:21:33.982992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:00.163 [2024-12-12 09:21:33.983030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.163 [2024-12-12 09:21:33.983601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.163 [2024-12-12 09:21:33.983666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:00.163 [2024-12-12 09:21:33.983814] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:00.163 [2024-12-12 09:21:33.983876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:00.163 [2024-12-12 09:21:33.984049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.163 [2024-12-12 09:21:33.984093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.163 [2024-12-12 09:21:33.984390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:00.163 [2024-12-12 09:21:33.984605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.163 [2024-12-12 09:21:33.984641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.163 [2024-12-12 09:21:33.984834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.163 pt2 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.163 
09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.163 09:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.163 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.163 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.163 "name": "raid_bdev1", 00:08:00.163 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:08:00.163 "strip_size_kb": 0, 00:08:00.163 "state": "online", 00:08:00.163 "raid_level": "raid1", 00:08:00.163 "superblock": true, 00:08:00.163 "num_base_bdevs": 2, 00:08:00.163 "num_base_bdevs_discovered": 2, 00:08:00.163 "num_base_bdevs_operational": 2, 00:08:00.163 "base_bdevs_list": [ 00:08:00.163 { 00:08:00.163 "name": "pt1", 00:08:00.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.163 "is_configured": true, 00:08:00.163 "data_offset": 2048, 00:08:00.163 "data_size": 63488 00:08:00.163 }, 00:08:00.163 { 00:08:00.163 "name": "pt2", 00:08:00.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.163 "is_configured": true, 00:08:00.163 "data_offset": 2048, 00:08:00.163 "data_size": 63488 00:08:00.163 } 00:08:00.163 ] 00:08:00.163 }' 00:08:00.163 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:00.163 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.423 [2024-12-12 09:21:34.410286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.423 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.682 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.682 "name": "raid_bdev1", 00:08:00.682 "aliases": [ 00:08:00.682 "b04fd84a-0ff4-4cb5-9927-99c0c873918c" 00:08:00.682 ], 00:08:00.682 "product_name": "Raid Volume", 00:08:00.682 "block_size": 512, 00:08:00.682 "num_blocks": 63488, 00:08:00.682 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:08:00.682 "assigned_rate_limits": { 00:08:00.682 "rw_ios_per_sec": 0, 00:08:00.682 "rw_mbytes_per_sec": 0, 00:08:00.682 "r_mbytes_per_sec": 0, 00:08:00.682 "w_mbytes_per_sec": 0 
00:08:00.682 }, 00:08:00.682 "claimed": false, 00:08:00.682 "zoned": false, 00:08:00.682 "supported_io_types": { 00:08:00.682 "read": true, 00:08:00.682 "write": true, 00:08:00.682 "unmap": false, 00:08:00.682 "flush": false, 00:08:00.682 "reset": true, 00:08:00.682 "nvme_admin": false, 00:08:00.682 "nvme_io": false, 00:08:00.683 "nvme_io_md": false, 00:08:00.683 "write_zeroes": true, 00:08:00.683 "zcopy": false, 00:08:00.683 "get_zone_info": false, 00:08:00.683 "zone_management": false, 00:08:00.683 "zone_append": false, 00:08:00.683 "compare": false, 00:08:00.683 "compare_and_write": false, 00:08:00.683 "abort": false, 00:08:00.683 "seek_hole": false, 00:08:00.683 "seek_data": false, 00:08:00.683 "copy": false, 00:08:00.683 "nvme_iov_md": false 00:08:00.683 }, 00:08:00.683 "memory_domains": [ 00:08:00.683 { 00:08:00.683 "dma_device_id": "system", 00:08:00.683 "dma_device_type": 1 00:08:00.683 }, 00:08:00.683 { 00:08:00.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.683 "dma_device_type": 2 00:08:00.683 }, 00:08:00.683 { 00:08:00.683 "dma_device_id": "system", 00:08:00.683 "dma_device_type": 1 00:08:00.683 }, 00:08:00.683 { 00:08:00.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.683 "dma_device_type": 2 00:08:00.683 } 00:08:00.683 ], 00:08:00.683 "driver_specific": { 00:08:00.683 "raid": { 00:08:00.683 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:08:00.683 "strip_size_kb": 0, 00:08:00.683 "state": "online", 00:08:00.683 "raid_level": "raid1", 00:08:00.683 "superblock": true, 00:08:00.683 "num_base_bdevs": 2, 00:08:00.683 "num_base_bdevs_discovered": 2, 00:08:00.683 "num_base_bdevs_operational": 2, 00:08:00.683 "base_bdevs_list": [ 00:08:00.683 { 00:08:00.683 "name": "pt1", 00:08:00.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.683 "is_configured": true, 00:08:00.683 "data_offset": 2048, 00:08:00.683 "data_size": 63488 00:08:00.683 }, 00:08:00.683 { 00:08:00.683 "name": "pt2", 00:08:00.683 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:00.683 "is_configured": true, 00:08:00.683 "data_offset": 2048, 00:08:00.683 "data_size": 63488 00:08:00.683 } 00:08:00.683 ] 00:08:00.683 } 00:08:00.683 } 00:08:00.683 }' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:00.683 pt2' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 [2024-12-12 09:21:34.657764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b04fd84a-0ff4-4cb5-9927-99c0c873918c '!=' b04fd84a-0ff4-4cb5-9927-99c0c873918c ']' 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:00.683 [2024-12-12 09:21:34.693503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.683 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.943 "name": "raid_bdev1", 
00:08:00.943 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:08:00.943 "strip_size_kb": 0, 00:08:00.943 "state": "online", 00:08:00.943 "raid_level": "raid1", 00:08:00.943 "superblock": true, 00:08:00.943 "num_base_bdevs": 2, 00:08:00.943 "num_base_bdevs_discovered": 1, 00:08:00.943 "num_base_bdevs_operational": 1, 00:08:00.943 "base_bdevs_list": [ 00:08:00.943 { 00:08:00.943 "name": null, 00:08:00.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.943 "is_configured": false, 00:08:00.943 "data_offset": 0, 00:08:00.943 "data_size": 63488 00:08:00.943 }, 00:08:00.943 { 00:08:00.943 "name": "pt2", 00:08:00.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.943 "is_configured": true, 00:08:00.943 "data_offset": 2048, 00:08:00.943 "data_size": 63488 00:08:00.943 } 00:08:00.943 ] 00:08:00.943 }' 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.943 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.202 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.202 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.202 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.202 [2024-12-12 09:21:35.136739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.202 [2024-12-12 09:21:35.136821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.202 [2024-12-12 09:21:35.136926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.202 [2024-12-12 09:21:35.137020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.202 [2024-12-12 09:21:35.137070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:01.202 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.202 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:01.203 09:21:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.203 [2024-12-12 09:21:35.212573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.203 [2024-12-12 09:21:35.212671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.203 [2024-12-12 09:21:35.212704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:01.203 [2024-12-12 09:21:35.212735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.203 [2024-12-12 09:21:35.215223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.203 [2024-12-12 09:21:35.215292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.203 [2024-12-12 09:21:35.215378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:01.203 [2024-12-12 09:21:35.215425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.203 [2024-12-12 09:21:35.215554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:01.203 [2024-12-12 09:21:35.215566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.203 [2024-12-12 09:21:35.215816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:01.203 [2024-12-12 09:21:35.216002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:01.203 [2024-12-12 09:21:35.216013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:01.203 
[2024-12-12 09:21:35.216152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.203 pt2 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.203 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.462 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.462 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.462 "name": 
"raid_bdev1", 00:08:01.462 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:08:01.462 "strip_size_kb": 0, 00:08:01.462 "state": "online", 00:08:01.462 "raid_level": "raid1", 00:08:01.462 "superblock": true, 00:08:01.462 "num_base_bdevs": 2, 00:08:01.462 "num_base_bdevs_discovered": 1, 00:08:01.462 "num_base_bdevs_operational": 1, 00:08:01.462 "base_bdevs_list": [ 00:08:01.462 { 00:08:01.462 "name": null, 00:08:01.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.462 "is_configured": false, 00:08:01.462 "data_offset": 2048, 00:08:01.462 "data_size": 63488 00:08:01.462 }, 00:08:01.462 { 00:08:01.462 "name": "pt2", 00:08:01.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.462 "is_configured": true, 00:08:01.462 "data_offset": 2048, 00:08:01.462 "data_size": 63488 00:08:01.462 } 00:08:01.462 ] 00:08:01.462 }' 00:08:01.462 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.462 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.721 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.721 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.721 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.721 [2024-12-12 09:21:35.651855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.721 [2024-12-12 09:21:35.651951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.721 [2024-12-12 09:21:35.652060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.722 [2024-12-12 09:21:35.652135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.722 [2024-12-12 09:21:35.652194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.722 [2024-12-12 09:21:35.711821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.722 [2024-12-12 09:21:35.711997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.722 [2024-12-12 09:21:35.712047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:01.722 [2024-12-12 09:21:35.712087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.722 [2024-12-12 09:21:35.714634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.722 [2024-12-12 09:21:35.714707] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.722 [2024-12-12 09:21:35.714849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:01.722 [2024-12-12 09:21:35.714953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.722 [2024-12-12 09:21:35.715167] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:01.722 [2024-12-12 09:21:35.715222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.722 [2024-12-12 09:21:35.715266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:01.722 [2024-12-12 09:21:35.715358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.722 [2024-12-12 09:21:35.715471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:01.722 [2024-12-12 09:21:35.715506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.722 [2024-12-12 09:21:35.715796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:01.722 [2024-12-12 09:21:35.716012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:01.722 [2024-12-12 09:21:35.716060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:01.722 [2024-12-12 09:21:35.716320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.722 pt1 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.722 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.980 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.980 "name": "raid_bdev1", 00:08:01.980 "uuid": "b04fd84a-0ff4-4cb5-9927-99c0c873918c", 00:08:01.980 "strip_size_kb": 0, 00:08:01.980 "state": "online", 00:08:01.980 "raid_level": "raid1", 00:08:01.980 "superblock": true, 00:08:01.980 "num_base_bdevs": 2, 00:08:01.980 "num_base_bdevs_discovered": 1, 00:08:01.980 "num_base_bdevs_operational": 1, 00:08:01.981 
"base_bdevs_list": [ 00:08:01.981 { 00:08:01.981 "name": null, 00:08:01.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.981 "is_configured": false, 00:08:01.981 "data_offset": 2048, 00:08:01.981 "data_size": 63488 00:08:01.981 }, 00:08:01.981 { 00:08:01.981 "name": "pt2", 00:08:01.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.981 "is_configured": true, 00:08:01.981 "data_offset": 2048, 00:08:01.981 "data_size": 63488 00:08:01.981 } 00:08:01.981 ] 00:08:01.981 }' 00:08:01.981 09:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.981 09:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.240 [2024-12-12 09:21:36.215890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b04fd84a-0ff4-4cb5-9927-99c0c873918c '!=' b04fd84a-0ff4-4cb5-9927-99c0c873918c ']' 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64351 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64351 ']' 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64351 00:08:02.240 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64351 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64351' 00:08:02.499 killing process with pid 64351 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64351 00:08:02.499 [2024-12-12 09:21:36.298592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.499 [2024-12-12 09:21:36.298726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.499 09:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64351 00:08:02.499 [2024-12-12 09:21:36.298783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.499 [2024-12-12 09:21:36.298800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:02.499 [2024-12-12 09:21:36.520617] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.878 09:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:03.878 00:08:03.878 real 0m6.135s 00:08:03.878 user 0m9.137s 00:08:03.878 sys 0m1.133s 00:08:03.878 ************************************ 00:08:03.878 END TEST raid_superblock_test 00:08:03.878 ************************************ 00:08:03.878 09:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.878 09:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.878 09:21:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:03.878 09:21:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.878 09:21:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.878 09:21:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.878 ************************************ 00:08:03.878 START TEST raid_read_error_test 00:08:03.878 ************************************ 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cRixLuVMW5 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64681 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64681 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 64681 ']' 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.878 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.878 [2024-12-12 09:21:37.887428] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:08:03.878 [2024-12-12 09:21:37.887617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64681 ] 00:08:04.137 [2024-12-12 09:21:38.058626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.396 [2024-12-12 09:21:38.194861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.655 [2024-12-12 09:21:38.435056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.655 [2024-12-12 09:21:38.435176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.914 09:21:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 BaseBdev1_malloc 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 true 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 [2024-12-12 09:21:38.764285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:04.914 [2024-12-12 09:21:38.764438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.914 [2024-12-12 09:21:38.764477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:04.914 [2024-12-12 09:21:38.764507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.914 [2024-12-12 09:21:38.766922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.914 [2024-12-12 09:21:38.767016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:04.914 BaseBdev1 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 BaseBdev2_malloc 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 true 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 [2024-12-12 09:21:38.836328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:04.914 [2024-12-12 09:21:38.836473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.914 [2024-12-12 09:21:38.836497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.914 [2024-12-12 09:21:38.836508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:04.914 [2024-12-12 09:21:38.838938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.914 [2024-12-12 09:21:38.838997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:04.914 BaseBdev2 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 [2024-12-12 09:21:38.848392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.914 [2024-12-12 09:21:38.850629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.914 [2024-12-12 09:21:38.850884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.914 [2024-12-12 09:21:38.850934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.914 [2024-12-12 09:21:38.851243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:04.914 [2024-12-12 09:21:38.851481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.914 [2024-12-12 09:21:38.851525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.914 [2024-12-12 09:21:38.851730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.914 "name": "raid_bdev1", 00:08:04.914 "uuid": "a8911719-9ed2-4ce3-89d4-123e09d8c23e", 00:08:04.914 "strip_size_kb": 0, 00:08:04.914 "state": "online", 00:08:04.914 "raid_level": "raid1", 00:08:04.914 "superblock": true, 00:08:04.914 "num_base_bdevs": 2, 00:08:04.914 "num_base_bdevs_discovered": 2, 00:08:04.914 "num_base_bdevs_operational": 
2, 00:08:04.914 "base_bdevs_list": [ 00:08:04.914 { 00:08:04.914 "name": "BaseBdev1", 00:08:04.914 "uuid": "21384d97-b7eb-5a68-8a60-c1944b617baf", 00:08:04.914 "is_configured": true, 00:08:04.914 "data_offset": 2048, 00:08:04.914 "data_size": 63488 00:08:04.914 }, 00:08:04.914 { 00:08:04.914 "name": "BaseBdev2", 00:08:04.914 "uuid": "c47d6fb1-b708-5c64-a4b0-8bcab52bb16d", 00:08:04.914 "is_configured": true, 00:08:04.914 "data_offset": 2048, 00:08:04.914 "data_size": 63488 00:08:04.914 } 00:08:04.914 ] 00:08:04.914 }' 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.914 09:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.482 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:05.482 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:05.482 [2024-12-12 09:21:39.400994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:06.421 
09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.421 "name": "raid_bdev1", 00:08:06.421 "uuid": "a8911719-9ed2-4ce3-89d4-123e09d8c23e", 00:08:06.421 "strip_size_kb": 0, 00:08:06.421 "state": "online", 00:08:06.421 "raid_level": "raid1", 00:08:06.421 "superblock": true, 00:08:06.421 "num_base_bdevs": 
2, 00:08:06.421 "num_base_bdevs_discovered": 2, 00:08:06.421 "num_base_bdevs_operational": 2, 00:08:06.421 "base_bdevs_list": [ 00:08:06.421 { 00:08:06.421 "name": "BaseBdev1", 00:08:06.421 "uuid": "21384d97-b7eb-5a68-8a60-c1944b617baf", 00:08:06.421 "is_configured": true, 00:08:06.421 "data_offset": 2048, 00:08:06.421 "data_size": 63488 00:08:06.421 }, 00:08:06.421 { 00:08:06.421 "name": "BaseBdev2", 00:08:06.421 "uuid": "c47d6fb1-b708-5c64-a4b0-8bcab52bb16d", 00:08:06.421 "is_configured": true, 00:08:06.421 "data_offset": 2048, 00:08:06.421 "data_size": 63488 00:08:06.421 } 00:08:06.421 ] 00:08:06.421 }' 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.421 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.993 [2024-12-12 09:21:40.762770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.993 [2024-12-12 09:21:40.762909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.993 [2024-12-12 09:21:40.765691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.993 [2024-12-12 09:21:40.765753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.993 [2024-12-12 09:21:40.765841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.993 [2024-12-12 09:21:40.765855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:06.993 { 00:08:06.993 "results": [ 00:08:06.993 { 00:08:06.993 "job": 
"raid_bdev1", 00:08:06.993 "core_mask": "0x1", 00:08:06.993 "workload": "randrw", 00:08:06.993 "percentage": 50, 00:08:06.993 "status": "finished", 00:08:06.993 "queue_depth": 1, 00:08:06.993 "io_size": 131072, 00:08:06.993 "runtime": 1.362727, 00:08:06.993 "iops": 14216.347074652516, 00:08:06.993 "mibps": 1777.0433843315645, 00:08:06.993 "io_failed": 0, 00:08:06.993 "io_timeout": 0, 00:08:06.993 "avg_latency_us": 67.71084034706385, 00:08:06.993 "min_latency_us": 23.699563318777294, 00:08:06.993 "max_latency_us": 1416.6078602620087 00:08:06.993 } 00:08:06.993 ], 00:08:06.993 "core_count": 1 00:08:06.993 } 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64681 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64681 ']' 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64681 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64681 00:08:06.993 killing process with pid 64681 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64681' 00:08:06.993 09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64681 00:08:06.993 [2024-12-12 09:21:40.814429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.993 
09:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64681 00:08:06.993 [2024-12-12 09:21:40.965658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.372 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cRixLuVMW5 00:08:08.372 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.372 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.372 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:08.372 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:08.372 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.373 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.373 09:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:08.373 00:08:08.373 real 0m4.467s 00:08:08.373 user 0m5.248s 00:08:08.373 sys 0m0.608s 00:08:08.373 09:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.373 09:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.373 ************************************ 00:08:08.373 END TEST raid_read_error_test 00:08:08.373 ************************************ 00:08:08.373 09:21:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:08.373 09:21:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.373 09:21:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.373 09:21:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.373 ************************************ 00:08:08.373 START TEST raid_write_error_test 00:08:08.373 ************************************ 00:08:08.373 09:21:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:08.373 
09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aQaayDOkpN 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64821 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64821 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64821 ']' 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.373 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.632 [2024-12-12 09:21:42.439917] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:08:08.632 [2024-12-12 09:21:42.440596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64821 ] 00:08:08.632 [2024-12-12 09:21:42.616095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.891 [2024-12-12 09:21:42.752674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.150 [2024-12-12 09:21:42.980336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.150 [2024-12-12 09:21:42.980412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 BaseBdev1_malloc 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 true 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 [2024-12-12 09:21:43.320640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:09.410 [2024-12-12 09:21:43.320795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.410 [2024-12-12 09:21:43.320836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:09.410 [2024-12-12 09:21:43.320868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.410 [2024-12-12 09:21:43.323340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.410 [2024-12-12 09:21:43.323422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:09.410 BaseBdev1 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 BaseBdev2_malloc 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:09.410 09:21:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 true 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 [2024-12-12 09:21:43.392483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:09.410 [2024-12-12 09:21:43.392620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.410 [2024-12-12 09:21:43.392656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:09.410 [2024-12-12 09:21:43.392688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.410 [2024-12-12 09:21:43.395091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.410 [2024-12-12 09:21:43.395177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:09.410 BaseBdev2 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 [2024-12-12 09:21:43.404529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:09.410 [2024-12-12 09:21:43.406671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.410 [2024-12-12 09:21:43.406938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.410 [2024-12-12 09:21:43.407004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.410 [2024-12-12 09:21:43.407281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:09.410 [2024-12-12 09:21:43.407519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.410 [2024-12-12 09:21:43.407562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:09.410 [2024-12-12 09:21:43.407781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.410 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.670 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.670 "name": "raid_bdev1", 00:08:09.670 "uuid": "752b2eed-a56a-403f-95dd-538a4fe10546", 00:08:09.670 "strip_size_kb": 0, 00:08:09.670 "state": "online", 00:08:09.670 "raid_level": "raid1", 00:08:09.670 "superblock": true, 00:08:09.670 "num_base_bdevs": 2, 00:08:09.670 "num_base_bdevs_discovered": 2, 00:08:09.670 "num_base_bdevs_operational": 2, 00:08:09.670 "base_bdevs_list": [ 00:08:09.670 { 00:08:09.670 "name": "BaseBdev1", 00:08:09.670 "uuid": "0518f064-dd62-5403-947c-9871683db9a3", 00:08:09.670 "is_configured": true, 00:08:09.670 "data_offset": 2048, 00:08:09.670 "data_size": 63488 00:08:09.670 }, 00:08:09.670 { 00:08:09.670 "name": "BaseBdev2", 00:08:09.670 "uuid": "3b05fc28-328f-598b-ba92-874796df8f63", 00:08:09.670 "is_configured": true, 00:08:09.670 "data_offset": 2048, 00:08:09.670 "data_size": 63488 00:08:09.670 } 00:08:09.670 ] 00:08:09.670 }' 00:08:09.670 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.670 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.929 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:09.929 09:21:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:09.929 [2024-12-12 09:21:43.897152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.868 [2024-12-12 09:21:44.815309] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:10.868 [2024-12-12 09:21:44.815501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.868 [2024-12-12 09:21:44.815755] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.868 "name": "raid_bdev1", 00:08:10.868 "uuid": "752b2eed-a56a-403f-95dd-538a4fe10546", 00:08:10.868 "strip_size_kb": 0, 00:08:10.868 "state": "online", 00:08:10.868 "raid_level": "raid1", 00:08:10.868 "superblock": true, 00:08:10.868 "num_base_bdevs": 2, 00:08:10.868 "num_base_bdevs_discovered": 1, 00:08:10.868 "num_base_bdevs_operational": 1, 00:08:10.868 "base_bdevs_list": [ 00:08:10.868 { 00:08:10.868 "name": null, 00:08:10.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.868 "is_configured": false, 00:08:10.868 "data_offset": 0, 00:08:10.868 "data_size": 63488 00:08:10.868 }, 00:08:10.868 { 00:08:10.868 "name": 
"BaseBdev2", 00:08:10.868 "uuid": "3b05fc28-328f-598b-ba92-874796df8f63", 00:08:10.868 "is_configured": true, 00:08:10.868 "data_offset": 2048, 00:08:10.868 "data_size": 63488 00:08:10.868 } 00:08:10.868 ] 00:08:10.868 }' 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.868 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.436 [2024-12-12 09:21:45.220376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.436 [2024-12-12 09:21:45.220498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.436 [2024-12-12 09:21:45.223056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.436 [2024-12-12 09:21:45.223144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.436 [2024-12-12 09:21:45.223228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.436 [2024-12-12 09:21:45.223282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:11.436 { 00:08:11.436 "results": [ 00:08:11.436 { 00:08:11.436 "job": "raid_bdev1", 00:08:11.436 "core_mask": "0x1", 00:08:11.436 "workload": "randrw", 00:08:11.436 "percentage": 50, 00:08:11.436 "status": "finished", 00:08:11.436 "queue_depth": 1, 00:08:11.436 "io_size": 131072, 00:08:11.436 "runtime": 1.323741, 00:08:11.436 "iops": 17222.402267513055, 00:08:11.436 "mibps": 2152.800283439132, 00:08:11.436 "io_failed": 0, 00:08:11.436 "io_timeout": 0, 
00:08:11.436 "avg_latency_us": 55.48202366636773, 00:08:11.436 "min_latency_us": 21.910917030567685, 00:08:11.436 "max_latency_us": 1252.0524017467249 00:08:11.436 } 00:08:11.436 ], 00:08:11.436 "core_count": 1 00:08:11.436 } 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64821 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64821 ']' 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64821 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64821 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.436 killing process with pid 64821 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64821' 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64821 00:08:11.436 [2024-12-12 09:21:45.259315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.436 09:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64821 00:08:11.436 [2024-12-12 09:21:45.397368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aQaayDOkpN 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:12.816 ************************************ 00:08:12.816 END TEST raid_write_error_test 00:08:12.816 ************************************ 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:12.816 00:08:12.816 real 0m4.379s 00:08:12.816 user 0m5.067s 00:08:12.816 sys 0m0.602s 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.816 09:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.816 09:21:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:12.816 09:21:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:12.816 09:21:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:12.816 09:21:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.816 09:21:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.816 09:21:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.816 ************************************ 00:08:12.816 START TEST raid_state_function_test 00:08:12.816 ************************************ 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:12.816 
09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:12.816 Process raid pid: 64969 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64969 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64969' 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64969 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64969 ']' 00:08:12.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.816 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.074 [2024-12-12 09:21:46.874581] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:08:13.074 [2024-12-12 09:21:46.874714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.074 [2024-12-12 09:21:47.052419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.334 [2024-12-12 09:21:47.193779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.593 [2024-12-12 09:21:47.432626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.593 [2024-12-12 09:21:47.432679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.853 [2024-12-12 09:21:47.716192] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.853 [2024-12-12 09:21:47.716331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.853 [2024-12-12 09:21:47.716347] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.853 [2024-12-12 09:21:47.716359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.853 [2024-12-12 09:21:47.716366] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:13.853 [2024-12-12 09:21:47.716376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.853 "name": "Existed_Raid", 00:08:13.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.853 "strip_size_kb": 64, 00:08:13.853 "state": "configuring", 00:08:13.853 "raid_level": "raid0", 00:08:13.853 "superblock": false, 00:08:13.853 "num_base_bdevs": 3, 00:08:13.853 "num_base_bdevs_discovered": 0, 00:08:13.853 "num_base_bdevs_operational": 3, 00:08:13.853 "base_bdevs_list": [ 00:08:13.853 { 00:08:13.853 "name": "BaseBdev1", 00:08:13.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.853 "is_configured": false, 00:08:13.853 "data_offset": 0, 00:08:13.853 "data_size": 0 00:08:13.853 }, 00:08:13.853 { 00:08:13.853 "name": "BaseBdev2", 00:08:13.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.853 "is_configured": false, 00:08:13.853 "data_offset": 0, 00:08:13.853 "data_size": 0 00:08:13.853 }, 00:08:13.853 { 00:08:13.853 "name": "BaseBdev3", 00:08:13.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.853 "is_configured": false, 00:08:13.853 "data_offset": 0, 00:08:13.853 "data_size": 0 00:08:13.853 } 00:08:13.853 ] 00:08:13.853 }' 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.853 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.113 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.113 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.113 09:21:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.113 [2024-12-12 09:21:48.119463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.113 [2024-12-12 09:21:48.119513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.113 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.113 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.113 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.113 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.113 [2024-12-12 09:21:48.131421] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.113 [2024-12-12 09:21:48.131470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.113 [2024-12-12 09:21:48.131480] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.113 [2024-12-12 09:21:48.131490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.113 [2024-12-12 09:21:48.131496] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.113 [2024-12-12 09:21:48.131505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.373 [2024-12-12 09:21:48.185741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.373 BaseBdev1 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.373 [ 00:08:14.373 { 00:08:14.373 "name": "BaseBdev1", 00:08:14.373 "aliases": [ 00:08:14.373 "e8a865f3-132a-4d7d-bc46-cb38d9518bbd" 00:08:14.373 ], 00:08:14.373 
"product_name": "Malloc disk", 00:08:14.373 "block_size": 512, 00:08:14.373 "num_blocks": 65536, 00:08:14.373 "uuid": "e8a865f3-132a-4d7d-bc46-cb38d9518bbd", 00:08:14.373 "assigned_rate_limits": { 00:08:14.373 "rw_ios_per_sec": 0, 00:08:14.373 "rw_mbytes_per_sec": 0, 00:08:14.373 "r_mbytes_per_sec": 0, 00:08:14.373 "w_mbytes_per_sec": 0 00:08:14.373 }, 00:08:14.373 "claimed": true, 00:08:14.373 "claim_type": "exclusive_write", 00:08:14.373 "zoned": false, 00:08:14.373 "supported_io_types": { 00:08:14.373 "read": true, 00:08:14.373 "write": true, 00:08:14.373 "unmap": true, 00:08:14.373 "flush": true, 00:08:14.373 "reset": true, 00:08:14.373 "nvme_admin": false, 00:08:14.373 "nvme_io": false, 00:08:14.373 "nvme_io_md": false, 00:08:14.373 "write_zeroes": true, 00:08:14.373 "zcopy": true, 00:08:14.373 "get_zone_info": false, 00:08:14.373 "zone_management": false, 00:08:14.373 "zone_append": false, 00:08:14.373 "compare": false, 00:08:14.373 "compare_and_write": false, 00:08:14.373 "abort": true, 00:08:14.373 "seek_hole": false, 00:08:14.373 "seek_data": false, 00:08:14.373 "copy": true, 00:08:14.373 "nvme_iov_md": false 00:08:14.373 }, 00:08:14.373 "memory_domains": [ 00:08:14.373 { 00:08:14.373 "dma_device_id": "system", 00:08:14.373 "dma_device_type": 1 00:08:14.373 }, 00:08:14.373 { 00:08:14.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.373 "dma_device_type": 2 00:08:14.373 } 00:08:14.373 ], 00:08:14.373 "driver_specific": {} 00:08:14.373 } 00:08:14.373 ] 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.373 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.374 09:21:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.374 "name": "Existed_Raid", 00:08:14.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.374 "strip_size_kb": 64, 00:08:14.374 "state": "configuring", 00:08:14.374 "raid_level": "raid0", 00:08:14.374 "superblock": false, 00:08:14.374 "num_base_bdevs": 3, 00:08:14.374 "num_base_bdevs_discovered": 1, 00:08:14.374 "num_base_bdevs_operational": 3, 00:08:14.374 "base_bdevs_list": [ 00:08:14.374 { 00:08:14.374 "name": "BaseBdev1", 
00:08:14.374 "uuid": "e8a865f3-132a-4d7d-bc46-cb38d9518bbd", 00:08:14.374 "is_configured": true, 00:08:14.374 "data_offset": 0, 00:08:14.374 "data_size": 65536 00:08:14.374 }, 00:08:14.374 { 00:08:14.374 "name": "BaseBdev2", 00:08:14.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.374 "is_configured": false, 00:08:14.374 "data_offset": 0, 00:08:14.374 "data_size": 0 00:08:14.374 }, 00:08:14.374 { 00:08:14.374 "name": "BaseBdev3", 00:08:14.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.374 "is_configured": false, 00:08:14.374 "data_offset": 0, 00:08:14.374 "data_size": 0 00:08:14.374 } 00:08:14.374 ] 00:08:14.374 }' 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.374 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.952 [2024-12-12 09:21:48.665029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.952 [2024-12-12 09:21:48.665118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.952 [2024-12-12 
09:21:48.673070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.952 [2024-12-12 09:21:48.675283] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.952 [2024-12-12 09:21:48.675334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.952 [2024-12-12 09:21:48.675345] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.952 [2024-12-12 09:21:48.675354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.952 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.953 "name": "Existed_Raid", 00:08:14.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.953 "strip_size_kb": 64, 00:08:14.953 "state": "configuring", 00:08:14.953 "raid_level": "raid0", 00:08:14.953 "superblock": false, 00:08:14.953 "num_base_bdevs": 3, 00:08:14.953 "num_base_bdevs_discovered": 1, 00:08:14.953 "num_base_bdevs_operational": 3, 00:08:14.953 "base_bdevs_list": [ 00:08:14.953 { 00:08:14.953 "name": "BaseBdev1", 00:08:14.953 "uuid": "e8a865f3-132a-4d7d-bc46-cb38d9518bbd", 00:08:14.953 "is_configured": true, 00:08:14.953 "data_offset": 0, 00:08:14.953 "data_size": 65536 00:08:14.953 }, 00:08:14.953 { 00:08:14.953 "name": "BaseBdev2", 00:08:14.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.953 "is_configured": false, 00:08:14.953 "data_offset": 0, 00:08:14.953 "data_size": 0 00:08:14.953 }, 00:08:14.953 { 00:08:14.953 "name": "BaseBdev3", 00:08:14.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.953 "is_configured": false, 00:08:14.953 "data_offset": 0, 00:08:14.953 "data_size": 0 00:08:14.953 } 00:08:14.953 ] 00:08:14.953 }' 00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:14.953 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.213 [2024-12-12 09:21:49.136273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.213 BaseBdev2 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.213 09:21:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.213 [ 00:08:15.213 { 00:08:15.213 "name": "BaseBdev2", 00:08:15.213 "aliases": [ 00:08:15.213 "6ba093fc-deec-4837-9ae2-42f2d898fa15" 00:08:15.213 ], 00:08:15.213 "product_name": "Malloc disk", 00:08:15.213 "block_size": 512, 00:08:15.213 "num_blocks": 65536, 00:08:15.213 "uuid": "6ba093fc-deec-4837-9ae2-42f2d898fa15", 00:08:15.213 "assigned_rate_limits": { 00:08:15.213 "rw_ios_per_sec": 0, 00:08:15.213 "rw_mbytes_per_sec": 0, 00:08:15.213 "r_mbytes_per_sec": 0, 00:08:15.213 "w_mbytes_per_sec": 0 00:08:15.213 }, 00:08:15.213 "claimed": true, 00:08:15.213 "claim_type": "exclusive_write", 00:08:15.213 "zoned": false, 00:08:15.213 "supported_io_types": { 00:08:15.213 "read": true, 00:08:15.213 "write": true, 00:08:15.213 "unmap": true, 00:08:15.213 "flush": true, 00:08:15.213 "reset": true, 00:08:15.213 "nvme_admin": false, 00:08:15.213 "nvme_io": false, 00:08:15.213 "nvme_io_md": false, 00:08:15.213 "write_zeroes": true, 00:08:15.213 "zcopy": true, 00:08:15.213 "get_zone_info": false, 00:08:15.213 "zone_management": false, 00:08:15.213 "zone_append": false, 00:08:15.213 "compare": false, 00:08:15.213 "compare_and_write": false, 00:08:15.213 "abort": true, 00:08:15.213 "seek_hole": false, 00:08:15.213 "seek_data": false, 00:08:15.213 "copy": true, 00:08:15.213 "nvme_iov_md": false 00:08:15.213 }, 00:08:15.213 "memory_domains": [ 00:08:15.213 { 00:08:15.213 "dma_device_id": "system", 00:08:15.213 "dma_device_type": 1 00:08:15.213 }, 00:08:15.213 { 00:08:15.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.213 "dma_device_type": 2 00:08:15.213 } 00:08:15.213 ], 00:08:15.213 "driver_specific": {} 00:08:15.213 } 00:08:15.213 ] 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.213 09:21:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.213 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.213 "name": "Existed_Raid", 00:08:15.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.213 "strip_size_kb": 64, 00:08:15.213 "state": "configuring", 00:08:15.213 "raid_level": "raid0", 00:08:15.213 "superblock": false, 00:08:15.213 "num_base_bdevs": 3, 00:08:15.213 "num_base_bdevs_discovered": 2, 00:08:15.213 "num_base_bdevs_operational": 3, 00:08:15.213 "base_bdevs_list": [ 00:08:15.213 { 00:08:15.213 "name": "BaseBdev1", 00:08:15.213 "uuid": "e8a865f3-132a-4d7d-bc46-cb38d9518bbd", 00:08:15.213 "is_configured": true, 00:08:15.213 "data_offset": 0, 00:08:15.213 "data_size": 65536 00:08:15.213 }, 00:08:15.213 { 00:08:15.213 "name": "BaseBdev2", 00:08:15.213 "uuid": "6ba093fc-deec-4837-9ae2-42f2d898fa15", 00:08:15.213 "is_configured": true, 00:08:15.213 "data_offset": 0, 00:08:15.213 "data_size": 65536 00:08:15.213 }, 00:08:15.213 { 00:08:15.213 "name": "BaseBdev3", 00:08:15.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.213 "is_configured": false, 00:08:15.213 "data_offset": 0, 00:08:15.213 "data_size": 0 00:08:15.214 } 00:08:15.214 ] 00:08:15.214 }' 00:08:15.214 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.214 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.783 [2024-12-12 09:21:49.638797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.783 [2024-12-12 09:21:49.638928] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:15.783 [2024-12-12 09:21:49.638978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:15.783 [2024-12-12 09:21:49.639344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:15.783 [2024-12-12 09:21:49.639594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:15.783 [2024-12-12 09:21:49.639637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:15.783 [2024-12-12 09:21:49.639981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.783 BaseBdev3 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.783 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.784 
09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.784 [ 00:08:15.784 { 00:08:15.784 "name": "BaseBdev3", 00:08:15.784 "aliases": [ 00:08:15.784 "ff01fa0a-3aaa-4d1b-afc1-f8909640794c" 00:08:15.784 ], 00:08:15.784 "product_name": "Malloc disk", 00:08:15.784 "block_size": 512, 00:08:15.784 "num_blocks": 65536, 00:08:15.784 "uuid": "ff01fa0a-3aaa-4d1b-afc1-f8909640794c", 00:08:15.784 "assigned_rate_limits": { 00:08:15.784 "rw_ios_per_sec": 0, 00:08:15.784 "rw_mbytes_per_sec": 0, 00:08:15.784 "r_mbytes_per_sec": 0, 00:08:15.784 "w_mbytes_per_sec": 0 00:08:15.784 }, 00:08:15.784 "claimed": true, 00:08:15.784 "claim_type": "exclusive_write", 00:08:15.784 "zoned": false, 00:08:15.784 "supported_io_types": { 00:08:15.784 "read": true, 00:08:15.784 "write": true, 00:08:15.784 "unmap": true, 00:08:15.784 "flush": true, 00:08:15.784 "reset": true, 00:08:15.784 "nvme_admin": false, 00:08:15.784 "nvme_io": false, 00:08:15.784 "nvme_io_md": false, 00:08:15.784 "write_zeroes": true, 00:08:15.784 "zcopy": true, 00:08:15.784 "get_zone_info": false, 00:08:15.784 "zone_management": false, 00:08:15.784 "zone_append": false, 00:08:15.784 "compare": false, 00:08:15.784 "compare_and_write": false, 00:08:15.784 "abort": true, 00:08:15.784 "seek_hole": false, 00:08:15.784 "seek_data": false, 00:08:15.784 "copy": true, 00:08:15.784 "nvme_iov_md": false 00:08:15.784 }, 00:08:15.784 "memory_domains": [ 00:08:15.784 { 00:08:15.784 "dma_device_id": "system", 00:08:15.784 "dma_device_type": 1 00:08:15.784 }, 00:08:15.784 { 00:08:15.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.784 "dma_device_type": 2 00:08:15.784 } 00:08:15.784 ], 00:08:15.784 "driver_specific": {} 00:08:15.784 } 00:08:15.784 ] 
00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.784 "name": "Existed_Raid", 00:08:15.784 "uuid": "e1f34110-a79f-4e45-885f-da6d09752885", 00:08:15.784 "strip_size_kb": 64, 00:08:15.784 "state": "online", 00:08:15.784 "raid_level": "raid0", 00:08:15.784 "superblock": false, 00:08:15.784 "num_base_bdevs": 3, 00:08:15.784 "num_base_bdevs_discovered": 3, 00:08:15.784 "num_base_bdevs_operational": 3, 00:08:15.784 "base_bdevs_list": [ 00:08:15.784 { 00:08:15.784 "name": "BaseBdev1", 00:08:15.784 "uuid": "e8a865f3-132a-4d7d-bc46-cb38d9518bbd", 00:08:15.784 "is_configured": true, 00:08:15.784 "data_offset": 0, 00:08:15.784 "data_size": 65536 00:08:15.784 }, 00:08:15.784 { 00:08:15.784 "name": "BaseBdev2", 00:08:15.784 "uuid": "6ba093fc-deec-4837-9ae2-42f2d898fa15", 00:08:15.784 "is_configured": true, 00:08:15.784 "data_offset": 0, 00:08:15.784 "data_size": 65536 00:08:15.784 }, 00:08:15.784 { 00:08:15.784 "name": "BaseBdev3", 00:08:15.784 "uuid": "ff01fa0a-3aaa-4d1b-afc1-f8909640794c", 00:08:15.784 "is_configured": true, 00:08:15.784 "data_offset": 0, 00:08:15.784 "data_size": 65536 00:08:15.784 } 00:08:15.784 ] 00:08:15.784 }' 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.784 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.353 [2024-12-12 09:21:50.110416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.353 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.353 "name": "Existed_Raid", 00:08:16.353 "aliases": [ 00:08:16.353 "e1f34110-a79f-4e45-885f-da6d09752885" 00:08:16.353 ], 00:08:16.353 "product_name": "Raid Volume", 00:08:16.353 "block_size": 512, 00:08:16.353 "num_blocks": 196608, 00:08:16.353 "uuid": "e1f34110-a79f-4e45-885f-da6d09752885", 00:08:16.353 "assigned_rate_limits": { 00:08:16.353 "rw_ios_per_sec": 0, 00:08:16.353 "rw_mbytes_per_sec": 0, 00:08:16.353 "r_mbytes_per_sec": 0, 00:08:16.353 "w_mbytes_per_sec": 0 00:08:16.353 }, 00:08:16.353 "claimed": false, 00:08:16.353 "zoned": false, 00:08:16.353 "supported_io_types": { 00:08:16.353 "read": true, 00:08:16.353 "write": true, 00:08:16.353 "unmap": true, 00:08:16.353 "flush": true, 00:08:16.353 "reset": true, 00:08:16.353 "nvme_admin": false, 00:08:16.353 "nvme_io": false, 00:08:16.353 "nvme_io_md": false, 00:08:16.353 "write_zeroes": true, 00:08:16.353 "zcopy": false, 00:08:16.353 "get_zone_info": false, 00:08:16.353 "zone_management": false, 00:08:16.353 
"zone_append": false, 00:08:16.353 "compare": false, 00:08:16.353 "compare_and_write": false, 00:08:16.353 "abort": false, 00:08:16.353 "seek_hole": false, 00:08:16.353 "seek_data": false, 00:08:16.353 "copy": false, 00:08:16.353 "nvme_iov_md": false 00:08:16.353 }, 00:08:16.353 "memory_domains": [ 00:08:16.353 { 00:08:16.353 "dma_device_id": "system", 00:08:16.353 "dma_device_type": 1 00:08:16.353 }, 00:08:16.353 { 00:08:16.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.353 "dma_device_type": 2 00:08:16.353 }, 00:08:16.353 { 00:08:16.353 "dma_device_id": "system", 00:08:16.353 "dma_device_type": 1 00:08:16.353 }, 00:08:16.354 { 00:08:16.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.354 "dma_device_type": 2 00:08:16.354 }, 00:08:16.354 { 00:08:16.354 "dma_device_id": "system", 00:08:16.354 "dma_device_type": 1 00:08:16.354 }, 00:08:16.354 { 00:08:16.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.354 "dma_device_type": 2 00:08:16.354 } 00:08:16.354 ], 00:08:16.354 "driver_specific": { 00:08:16.354 "raid": { 00:08:16.354 "uuid": "e1f34110-a79f-4e45-885f-da6d09752885", 00:08:16.354 "strip_size_kb": 64, 00:08:16.354 "state": "online", 00:08:16.354 "raid_level": "raid0", 00:08:16.354 "superblock": false, 00:08:16.354 "num_base_bdevs": 3, 00:08:16.354 "num_base_bdevs_discovered": 3, 00:08:16.354 "num_base_bdevs_operational": 3, 00:08:16.354 "base_bdevs_list": [ 00:08:16.354 { 00:08:16.354 "name": "BaseBdev1", 00:08:16.354 "uuid": "e8a865f3-132a-4d7d-bc46-cb38d9518bbd", 00:08:16.354 "is_configured": true, 00:08:16.354 "data_offset": 0, 00:08:16.354 "data_size": 65536 00:08:16.354 }, 00:08:16.354 { 00:08:16.354 "name": "BaseBdev2", 00:08:16.354 "uuid": "6ba093fc-deec-4837-9ae2-42f2d898fa15", 00:08:16.354 "is_configured": true, 00:08:16.354 "data_offset": 0, 00:08:16.354 "data_size": 65536 00:08:16.354 }, 00:08:16.354 { 00:08:16.354 "name": "BaseBdev3", 00:08:16.354 "uuid": "ff01fa0a-3aaa-4d1b-afc1-f8909640794c", 00:08:16.354 "is_configured": true, 
00:08:16.354 "data_offset": 0, 00:08:16.354 "data_size": 65536 00:08:16.354 } 00:08:16.354 ] 00:08:16.354 } 00:08:16.354 } 00:08:16.354 }' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:16.354 BaseBdev2 00:08:16.354 BaseBdev3' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.354 09:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.354 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.354 [2024-12-12 09:21:50.353729] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.354 [2024-12-12 09:21:50.353775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.354 [2024-12-12 09:21:50.353840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.614 "name": "Existed_Raid", 00:08:16.614 "uuid": "e1f34110-a79f-4e45-885f-da6d09752885", 00:08:16.614 "strip_size_kb": 64, 00:08:16.614 "state": "offline", 00:08:16.614 "raid_level": "raid0", 00:08:16.614 "superblock": false, 00:08:16.614 "num_base_bdevs": 3, 00:08:16.614 "num_base_bdevs_discovered": 2, 00:08:16.614 "num_base_bdevs_operational": 2, 00:08:16.614 "base_bdevs_list": [ 00:08:16.614 { 00:08:16.614 "name": null, 00:08:16.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.614 "is_configured": false, 00:08:16.614 "data_offset": 0, 00:08:16.614 "data_size": 65536 00:08:16.614 }, 00:08:16.614 { 00:08:16.614 "name": "BaseBdev2", 00:08:16.614 "uuid": "6ba093fc-deec-4837-9ae2-42f2d898fa15", 00:08:16.614 "is_configured": true, 00:08:16.614 "data_offset": 0, 00:08:16.614 "data_size": 65536 00:08:16.614 }, 00:08:16.614 { 00:08:16.614 "name": "BaseBdev3", 00:08:16.614 "uuid": "ff01fa0a-3aaa-4d1b-afc1-f8909640794c", 00:08:16.614 "is_configured": true, 00:08:16.614 "data_offset": 0, 00:08:16.614 "data_size": 65536 00:08:16.614 } 00:08:16.614 ] 00:08:16.614 }' 00:08:16.614 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.614 09:21:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.183 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.183 [2024-12-12 09:21:50.976903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.183 09:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.183 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.183 [2024-12-12 09:21:51.131309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:17.183 [2024-12-12 09:21:51.131376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 BaseBdev2 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 [ 00:08:17.444 { 00:08:17.444 "name": "BaseBdev2", 00:08:17.444 "aliases": [ 00:08:17.444 "3f5d95b4-7d17-4778-acb3-82b64bf04b61" 00:08:17.444 ], 00:08:17.444 "product_name": "Malloc disk", 00:08:17.444 "block_size": 512, 00:08:17.444 "num_blocks": 65536, 00:08:17.444 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:17.444 "assigned_rate_limits": { 00:08:17.444 "rw_ios_per_sec": 0, 00:08:17.444 "rw_mbytes_per_sec": 0, 00:08:17.444 "r_mbytes_per_sec": 0, 00:08:17.444 "w_mbytes_per_sec": 0 00:08:17.444 }, 00:08:17.444 "claimed": false, 00:08:17.444 "zoned": false, 00:08:17.444 "supported_io_types": { 00:08:17.444 "read": true, 00:08:17.444 "write": true, 00:08:17.444 "unmap": true, 00:08:17.444 "flush": true, 00:08:17.444 "reset": true, 00:08:17.444 "nvme_admin": false, 00:08:17.444 "nvme_io": false, 00:08:17.444 "nvme_io_md": false, 00:08:17.444 "write_zeroes": true, 00:08:17.444 "zcopy": true, 00:08:17.444 "get_zone_info": false, 00:08:17.444 "zone_management": false, 00:08:17.444 "zone_append": false, 00:08:17.444 "compare": false, 00:08:17.444 "compare_and_write": false, 00:08:17.444 "abort": true, 00:08:17.444 "seek_hole": false, 00:08:17.444 "seek_data": false, 00:08:17.444 "copy": true, 00:08:17.444 "nvme_iov_md": false 00:08:17.444 }, 00:08:17.444 "memory_domains": [ 00:08:17.444 { 00:08:17.444 "dma_device_id": "system", 00:08:17.444 "dma_device_type": 1 00:08:17.444 }, 
00:08:17.444 { 00:08:17.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.444 "dma_device_type": 2 00:08:17.444 } 00:08:17.444 ], 00:08:17.444 "driver_specific": {} 00:08:17.444 } 00:08:17.444 ] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 BaseBdev3 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 [ 00:08:17.444 { 00:08:17.444 "name": "BaseBdev3", 00:08:17.444 "aliases": [ 00:08:17.444 "3fd51588-24d5-475d-a023-723ea8e1bbbe" 00:08:17.444 ], 00:08:17.444 "product_name": "Malloc disk", 00:08:17.444 "block_size": 512, 00:08:17.444 "num_blocks": 65536, 00:08:17.444 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:17.444 "assigned_rate_limits": { 00:08:17.444 "rw_ios_per_sec": 0, 00:08:17.444 "rw_mbytes_per_sec": 0, 00:08:17.444 "r_mbytes_per_sec": 0, 00:08:17.444 "w_mbytes_per_sec": 0 00:08:17.444 }, 00:08:17.444 "claimed": false, 00:08:17.444 "zoned": false, 00:08:17.444 "supported_io_types": { 00:08:17.444 "read": true, 00:08:17.444 "write": true, 00:08:17.444 "unmap": true, 00:08:17.444 "flush": true, 00:08:17.444 "reset": true, 00:08:17.444 "nvme_admin": false, 00:08:17.444 "nvme_io": false, 00:08:17.444 "nvme_io_md": false, 00:08:17.444 "write_zeroes": true, 00:08:17.444 "zcopy": true, 00:08:17.444 "get_zone_info": false, 00:08:17.444 "zone_management": false, 00:08:17.444 "zone_append": false, 00:08:17.444 "compare": false, 00:08:17.444 "compare_and_write": false, 00:08:17.444 "abort": true, 00:08:17.444 "seek_hole": false, 00:08:17.444 "seek_data": false, 00:08:17.444 "copy": true, 00:08:17.444 "nvme_iov_md": false 00:08:17.444 }, 00:08:17.444 "memory_domains": [ 00:08:17.444 { 00:08:17.444 "dma_device_id": "system", 00:08:17.444 "dma_device_type": 1 00:08:17.444 }, 00:08:17.444 { 
00:08:17.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.444 "dma_device_type": 2 00:08:17.444 } 00:08:17.444 ], 00:08:17.444 "driver_specific": {} 00:08:17.444 } 00:08:17.444 ] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 [2024-12-12 09:21:51.452472] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.444 [2024-12-12 09:21:51.452606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.444 [2024-12-12 09:21:51.452652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.444 [2024-12-12 09:21:51.454808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.444 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.445 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.445 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.445 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.445 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.445 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.445 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.704 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.704 "name": "Existed_Raid", 00:08:17.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.704 "strip_size_kb": 64, 00:08:17.704 "state": "configuring", 00:08:17.704 "raid_level": "raid0", 00:08:17.704 "superblock": false, 00:08:17.704 "num_base_bdevs": 3, 00:08:17.704 "num_base_bdevs_discovered": 2, 00:08:17.704 "num_base_bdevs_operational": 3, 00:08:17.704 "base_bdevs_list": [ 00:08:17.704 { 00:08:17.704 "name": "BaseBdev1", 00:08:17.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.704 
"is_configured": false, 00:08:17.704 "data_offset": 0, 00:08:17.704 "data_size": 0 00:08:17.704 }, 00:08:17.704 { 00:08:17.704 "name": "BaseBdev2", 00:08:17.704 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:17.704 "is_configured": true, 00:08:17.704 "data_offset": 0, 00:08:17.704 "data_size": 65536 00:08:17.704 }, 00:08:17.704 { 00:08:17.704 "name": "BaseBdev3", 00:08:17.704 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:17.704 "is_configured": true, 00:08:17.704 "data_offset": 0, 00:08:17.704 "data_size": 65536 00:08:17.704 } 00:08:17.704 ] 00:08:17.704 }' 00:08:17.704 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.704 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.964 [2024-12-12 09:21:51.939793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.964 09:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.964 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.224 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.224 "name": "Existed_Raid", 00:08:18.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.224 "strip_size_kb": 64, 00:08:18.224 "state": "configuring", 00:08:18.224 "raid_level": "raid0", 00:08:18.224 "superblock": false, 00:08:18.224 "num_base_bdevs": 3, 00:08:18.224 "num_base_bdevs_discovered": 1, 00:08:18.224 "num_base_bdevs_operational": 3, 00:08:18.224 "base_bdevs_list": [ 00:08:18.224 { 00:08:18.224 "name": "BaseBdev1", 00:08:18.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.224 "is_configured": false, 00:08:18.224 "data_offset": 0, 00:08:18.224 "data_size": 0 00:08:18.224 }, 00:08:18.224 { 00:08:18.224 "name": null, 00:08:18.224 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:18.224 "is_configured": false, 00:08:18.224 "data_offset": 0, 
00:08:18.224 "data_size": 65536 00:08:18.224 }, 00:08:18.224 { 00:08:18.224 "name": "BaseBdev3", 00:08:18.224 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:18.224 "is_configured": true, 00:08:18.224 "data_offset": 0, 00:08:18.224 "data_size": 65536 00:08:18.224 } 00:08:18.224 ] 00:08:18.224 }' 00:08:18.224 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.224 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.483 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.744 [2024-12-12 09:21:52.512898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.744 BaseBdev1 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.744 [ 00:08:18.744 { 00:08:18.744 "name": "BaseBdev1", 00:08:18.744 "aliases": [ 00:08:18.744 "6f405dcf-08ee-4514-be00-87501a8c6acf" 00:08:18.744 ], 00:08:18.744 "product_name": "Malloc disk", 00:08:18.744 "block_size": 512, 00:08:18.744 "num_blocks": 65536, 00:08:18.744 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf", 00:08:18.744 "assigned_rate_limits": { 00:08:18.744 "rw_ios_per_sec": 0, 00:08:18.744 "rw_mbytes_per_sec": 0, 00:08:18.744 "r_mbytes_per_sec": 0, 00:08:18.744 "w_mbytes_per_sec": 0 00:08:18.744 }, 00:08:18.744 "claimed": true, 00:08:18.744 "claim_type": "exclusive_write", 00:08:18.744 "zoned": false, 00:08:18.744 "supported_io_types": { 00:08:18.744 "read": true, 00:08:18.744 "write": true, 00:08:18.744 "unmap": 
true, 00:08:18.744 "flush": true, 00:08:18.744 "reset": true, 00:08:18.744 "nvme_admin": false, 00:08:18.744 "nvme_io": false, 00:08:18.744 "nvme_io_md": false, 00:08:18.744 "write_zeroes": true, 00:08:18.744 "zcopy": true, 00:08:18.744 "get_zone_info": false, 00:08:18.744 "zone_management": false, 00:08:18.744 "zone_append": false, 00:08:18.744 "compare": false, 00:08:18.744 "compare_and_write": false, 00:08:18.744 "abort": true, 00:08:18.744 "seek_hole": false, 00:08:18.744 "seek_data": false, 00:08:18.744 "copy": true, 00:08:18.744 "nvme_iov_md": false 00:08:18.744 }, 00:08:18.744 "memory_domains": [ 00:08:18.744 { 00:08:18.744 "dma_device_id": "system", 00:08:18.744 "dma_device_type": 1 00:08:18.744 }, 00:08:18.744 { 00:08:18.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.744 "dma_device_type": 2 00:08:18.744 } 00:08:18.744 ], 00:08:18.744 "driver_specific": {} 00:08:18.744 } 00:08:18.744 ] 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.744 09:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.744 "name": "Existed_Raid", 00:08:18.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.744 "strip_size_kb": 64, 00:08:18.744 "state": "configuring", 00:08:18.744 "raid_level": "raid0", 00:08:18.744 "superblock": false, 00:08:18.744 "num_base_bdevs": 3, 00:08:18.744 "num_base_bdevs_discovered": 2, 00:08:18.744 "num_base_bdevs_operational": 3, 00:08:18.744 "base_bdevs_list": [ 00:08:18.744 { 00:08:18.744 "name": "BaseBdev1", 00:08:18.744 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf", 00:08:18.744 "is_configured": true, 00:08:18.744 "data_offset": 0, 00:08:18.744 "data_size": 65536 00:08:18.744 }, 00:08:18.744 { 00:08:18.744 "name": null, 00:08:18.744 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:18.744 "is_configured": false, 00:08:18.744 "data_offset": 0, 00:08:18.744 "data_size": 65536 00:08:18.744 }, 00:08:18.744 { 00:08:18.744 "name": "BaseBdev3", 00:08:18.744 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:18.744 "is_configured": true, 00:08:18.744 "data_offset": 0, 
00:08:18.744 "data_size": 65536 00:08:18.744 } 00:08:18.744 ] 00:08:18.744 }' 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.744 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 [2024-12-12 09:21:52.956178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:19.004 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.005 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.005 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.005 "name": "Existed_Raid", 00:08:19.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.005 "strip_size_kb": 64, 00:08:19.005 "state": "configuring", 00:08:19.005 "raid_level": "raid0", 00:08:19.005 "superblock": false, 00:08:19.005 "num_base_bdevs": 3, 00:08:19.005 "num_base_bdevs_discovered": 1, 00:08:19.005 "num_base_bdevs_operational": 3, 00:08:19.005 "base_bdevs_list": [ 00:08:19.005 { 00:08:19.005 "name": "BaseBdev1", 00:08:19.005 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf", 00:08:19.005 "is_configured": true, 00:08:19.005 "data_offset": 0, 00:08:19.005 "data_size": 65536 00:08:19.005 }, 00:08:19.005 { 
00:08:19.005 "name": null, 00:08:19.005 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:19.005 "is_configured": false, 00:08:19.005 "data_offset": 0, 00:08:19.005 "data_size": 65536 00:08:19.005 }, 00:08:19.005 { 00:08:19.005 "name": null, 00:08:19.005 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:19.005 "is_configured": false, 00:08:19.005 "data_offset": 0, 00:08:19.005 "data_size": 65536 00:08:19.005 } 00:08:19.005 ] 00:08:19.005 }' 00:08:19.005 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.005 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.574 [2024-12-12 09:21:53.395824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.574 "name": "Existed_Raid", 00:08:19.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.574 "strip_size_kb": 64, 00:08:19.574 "state": "configuring", 00:08:19.574 "raid_level": "raid0", 00:08:19.574 
"superblock": false, 00:08:19.574 "num_base_bdevs": 3, 00:08:19.574 "num_base_bdevs_discovered": 2, 00:08:19.574 "num_base_bdevs_operational": 3, 00:08:19.574 "base_bdevs_list": [ 00:08:19.574 { 00:08:19.574 "name": "BaseBdev1", 00:08:19.574 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf", 00:08:19.574 "is_configured": true, 00:08:19.574 "data_offset": 0, 00:08:19.574 "data_size": 65536 00:08:19.574 }, 00:08:19.574 { 00:08:19.574 "name": null, 00:08:19.574 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:19.574 "is_configured": false, 00:08:19.574 "data_offset": 0, 00:08:19.574 "data_size": 65536 00:08:19.574 }, 00:08:19.574 { 00:08:19.574 "name": "BaseBdev3", 00:08:19.574 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:19.574 "is_configured": true, 00:08:19.574 "data_offset": 0, 00:08:19.574 "data_size": 65536 00:08:19.574 } 00:08:19.574 ] 00:08:19.574 }' 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.574 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.833 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.833 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.833 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.833 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.833 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
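The verify steps in this trace repeatedly pipe `bdev_raid_get_bdevs all` through `jq` filters such as `.[] | select(.name == "Existed_Raid")` and `.[0].base_bdevs_list[N].is_configured`. The same selection can be sketched in Python against a trimmed sample of the JSON shape shown in the log (the field values come from the trace; the helper name is an assumption):

```python
import json

# Trimmed-down sample of the bdev_raid_get_bdevs output seen in the trace.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null,        "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

def select_raid(bdevs, name):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    return next(b for b in bdevs if b["name"] == name)

info = select_raid(raid_bdevs, "Existed_Raid")
print(info["state"])                                  # configuring
print(info["base_bdevs_list"][1]["is_configured"])    # False
```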
00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.091 [2024-12-12 09:21:53.894941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.091 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.092 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.092 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.092 "name": "Existed_Raid", 00:08:20.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.092 "strip_size_kb": 64, 00:08:20.092 "state": "configuring", 00:08:20.092 "raid_level": "raid0", 00:08:20.092 "superblock": false, 00:08:20.092 "num_base_bdevs": 3, 00:08:20.092 "num_base_bdevs_discovered": 1, 00:08:20.092 "num_base_bdevs_operational": 3, 00:08:20.092 "base_bdevs_list": [ 00:08:20.092 { 00:08:20.092 "name": null, 00:08:20.092 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf", 00:08:20.092 "is_configured": false, 00:08:20.092 "data_offset": 0, 00:08:20.092 "data_size": 65536 00:08:20.092 }, 00:08:20.092 { 00:08:20.092 "name": null, 00:08:20.092 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61", 00:08:20.092 "is_configured": false, 00:08:20.092 "data_offset": 0, 00:08:20.092 "data_size": 65536 00:08:20.092 }, 00:08:20.092 { 00:08:20.092 "name": "BaseBdev3", 00:08:20.092 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe", 00:08:20.092 "is_configured": true, 00:08:20.092 "data_offset": 0, 00:08:20.092 "data_size": 65536 00:08:20.092 } 00:08:20.092 ] 00:08:20.092 }' 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.092 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.660 [2024-12-12 09:21:54.437177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
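Each `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` call above compares a handful of fields from the selected raid entry against expected values. A sketch of those checks, using the field names from the JSON dumps in the log; the function body is an assumed reconstruction, not SPDK's actual shell helper:

```python
def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Check one entry of bdev_raid_get_bdevs output against the
    expectations the trace's verify_raid_bdev_state helper asserts."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)

# Values taken from the raid_bdev_info dump in the trace:
sample = {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
}
print(verify_raid_bdev_state(sample, "configuring", "raid0", 64, 3))  # True
```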
00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.660 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:20.660 "name": "Existed_Raid",
00:08:20.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.660 "strip_size_kb": 64,
00:08:20.660 "state": "configuring",
00:08:20.660 "raid_level": "raid0",
00:08:20.660 "superblock": false,
00:08:20.660 "num_base_bdevs": 3,
00:08:20.660 "num_base_bdevs_discovered": 2,
00:08:20.660 "num_base_bdevs_operational": 3,
00:08:20.660 "base_bdevs_list": [
00:08:20.660 {
00:08:20.660 "name": null,
00:08:20.660 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf",
00:08:20.660 "is_configured": false,
00:08:20.660 "data_offset": 0,
00:08:20.660 "data_size": 65536
00:08:20.660 },
00:08:20.660 {
00:08:20.660 "name": "BaseBdev2",
00:08:20.660 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61",
00:08:20.660 "is_configured": true,
00:08:20.660 "data_offset": 0,
00:08:20.660 "data_size": 65536
00:08:20.660 },
00:08:20.660 {
00:08:20.660 "name": "BaseBdev3",
00:08:20.660 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe",
00:08:20.660 "is_configured": true,
00:08:20.660 "data_offset": 0,
00:08:20.660 "data_size": 65536
00:08:20.660 }
00:08:20.660 ]
00:08:20.660 }'
00:08:20.661 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:20.661 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.920 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.180 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.180 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6f405dcf-08ee-4514-be00-87501a8c6acf
00:08:21.180 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.180 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.180 [2024-12-12 09:21:55.026365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:08:21.180 [2024-12-12 09:21:55.026415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:08:21.180 [2024-12-12 09:21:55.026426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:21.180 [2024-12-12 09:21:55.026706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:21.180 [2024-12-12 09:21:55.026869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:08:21.180 [2024-12-12 09:21:55.026879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:08:21.180 [2024-12-12 09:21:55.027185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:21.180 NewBaseBdev
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.180 [
00:08:21.180 {
00:08:21.180 "name": "NewBaseBdev",
00:08:21.180 "aliases": [
00:08:21.180 "6f405dcf-08ee-4514-be00-87501a8c6acf"
00:08:21.180 ],
00:08:21.180 "product_name": "Malloc disk",
00:08:21.180 "block_size": 512,
00:08:21.180 "num_blocks": 65536,
00:08:21.180 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf",
00:08:21.180 "assigned_rate_limits": {
00:08:21.180 "rw_ios_per_sec": 0,
00:08:21.180 "rw_mbytes_per_sec": 0,
00:08:21.180 "r_mbytes_per_sec": 0,
00:08:21.180 "w_mbytes_per_sec": 0
00:08:21.180 },
00:08:21.180 "claimed": true,
00:08:21.180 "claim_type": "exclusive_write",
00:08:21.180 "zoned": false,
00:08:21.180 "supported_io_types": {
00:08:21.180 "read": true,
00:08:21.180 "write": true,
00:08:21.180 "unmap": true,
00:08:21.180 "flush": true,
00:08:21.180 "reset": true,
00:08:21.180 "nvme_admin": false,
00:08:21.180 "nvme_io": false,
00:08:21.180 "nvme_io_md": false,
00:08:21.180 "write_zeroes": true,
00:08:21.180 "zcopy": true,
00:08:21.180 "get_zone_info": false,
00:08:21.180 "zone_management": false,
00:08:21.180 "zone_append": false,
00:08:21.180 "compare": false,
00:08:21.180 "compare_and_write": false,
00:08:21.180 "abort": true,
00:08:21.180 "seek_hole": false,
00:08:21.180 "seek_data": false,
00:08:21.180 "copy": true,
00:08:21.180 "nvme_iov_md": false
00:08:21.180 },
00:08:21.180 "memory_domains": [
00:08:21.180 {
00:08:21.180 "dma_device_id": "system",
00:08:21.180 "dma_device_type": 1
00:08:21.180 },
00:08:21.180 {
00:08:21.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:21.180 "dma_device_type": 2
00:08:21.180 }
00:08:21.180 ],
00:08:21.180 "driver_specific": {}
00:08:21.180 }
00:08:21.180 ]
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:21.180 "name": "Existed_Raid",
00:08:21.180 "uuid": "3afca109-90cc-4d2d-a9aa-421e7bc748d2",
00:08:21.180 "strip_size_kb": 64,
00:08:21.180 "state": "online",
00:08:21.180 "raid_level": "raid0",
00:08:21.180 "superblock": false,
00:08:21.180 "num_base_bdevs": 3,
00:08:21.180 "num_base_bdevs_discovered": 3,
00:08:21.180 "num_base_bdevs_operational": 3,
00:08:21.180 "base_bdevs_list": [
00:08:21.180 {
00:08:21.180 "name": "NewBaseBdev",
00:08:21.180 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf",
00:08:21.180 "is_configured": true,
00:08:21.180 "data_offset": 0,
00:08:21.180 "data_size": 65536
00:08:21.180 },
00:08:21.180 {
00:08:21.180 "name": "BaseBdev2",
00:08:21.180 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61",
00:08:21.180 "is_configured": true,
00:08:21.180 "data_offset": 0,
00:08:21.180 "data_size": 65536
00:08:21.180 },
00:08:21.180 {
00:08:21.180 "name": "BaseBdev3",
00:08:21.180 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe",
00:08:21.180 "is_configured": true,
00:08:21.180 "data_offset": 0,
00:08:21.180 "data_size": 65536
00:08:21.180 }
00:08:21.180 ]
00:08:21.180 }'
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:21.180 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.749 [2024-12-12 09:21:55.486016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.749 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:21.749 "name": "Existed_Raid",
00:08:21.749 "aliases": [
00:08:21.749 "3afca109-90cc-4d2d-a9aa-421e7bc748d2"
00:08:21.749 ],
00:08:21.749 "product_name": "Raid Volume",
00:08:21.749 "block_size": 512,
00:08:21.749 "num_blocks": 196608,
00:08:21.749 "uuid": "3afca109-90cc-4d2d-a9aa-421e7bc748d2",
00:08:21.749 "assigned_rate_limits": {
00:08:21.749 "rw_ios_per_sec": 0,
00:08:21.749 "rw_mbytes_per_sec": 0,
00:08:21.749 "r_mbytes_per_sec": 0,
00:08:21.749 "w_mbytes_per_sec": 0
00:08:21.749 },
00:08:21.749 "claimed": false,
00:08:21.749 "zoned": false,
00:08:21.749 "supported_io_types": {
00:08:21.749 "read": true,
00:08:21.749 "write": true,
00:08:21.749 "unmap": true,
00:08:21.749 "flush": true,
00:08:21.749 "reset": true,
00:08:21.749 "nvme_admin": false,
00:08:21.749 "nvme_io": false,
00:08:21.749 "nvme_io_md": false,
00:08:21.749 "write_zeroes": true,
00:08:21.749 "zcopy": false,
00:08:21.749 "get_zone_info": false,
00:08:21.749 "zone_management": false,
00:08:21.749 "zone_append": false,
00:08:21.749 "compare": false,
00:08:21.749 "compare_and_write": false,
00:08:21.749 "abort": false,
00:08:21.749 "seek_hole": false,
00:08:21.749 "seek_data": false,
00:08:21.749 "copy": false,
00:08:21.749 "nvme_iov_md": false
00:08:21.749 },
00:08:21.749 "memory_domains": [
00:08:21.749 {
00:08:21.749 "dma_device_id": "system",
00:08:21.749 "dma_device_type": 1
00:08:21.749 },
00:08:21.749 {
00:08:21.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:21.749 "dma_device_type": 2
00:08:21.749 },
00:08:21.749 {
00:08:21.749 "dma_device_id": "system",
00:08:21.749 "dma_device_type": 1
00:08:21.749 },
00:08:21.749 {
00:08:21.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:21.749 "dma_device_type": 2
00:08:21.749 },
00:08:21.749 {
00:08:21.749 "dma_device_id": "system",
00:08:21.749 "dma_device_type": 1
00:08:21.749 },
00:08:21.749 {
00:08:21.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:21.749 "dma_device_type": 2
00:08:21.749 }
00:08:21.749 ],
00:08:21.749 "driver_specific": {
00:08:21.749 "raid": {
00:08:21.749 "uuid": "3afca109-90cc-4d2d-a9aa-421e7bc748d2",
00:08:21.749 "strip_size_kb": 64,
00:08:21.749 "state": "online",
00:08:21.749 "raid_level": "raid0",
00:08:21.749 "superblock": false,
00:08:21.749 "num_base_bdevs": 3,
00:08:21.749 "num_base_bdevs_discovered": 3,
00:08:21.749 "num_base_bdevs_operational": 3,
00:08:21.750 "base_bdevs_list": [
00:08:21.750 {
00:08:21.750 "name": "NewBaseBdev",
00:08:21.750 "uuid": "6f405dcf-08ee-4514-be00-87501a8c6acf",
00:08:21.750 "is_configured": true,
00:08:21.750 "data_offset": 0,
00:08:21.750 "data_size": 65536
00:08:21.750 },
00:08:21.750 {
00:08:21.750 "name": "BaseBdev2",
00:08:21.750 "uuid": "3f5d95b4-7d17-4778-acb3-82b64bf04b61",
00:08:21.750 "is_configured": true,
00:08:21.750 "data_offset": 0,
00:08:21.750 "data_size": 65536
00:08:21.750 },
00:08:21.750 {
00:08:21.750 "name": "BaseBdev3",
00:08:21.750 "uuid": "3fd51588-24d5-475d-a023-723ea8e1bbbe",
00:08:21.750 "is_configured": true,
00:08:21.750 "data_offset": 0,
00:08:21.750 "data_size": 65536
00:08:21.750 }
00:08:21.750 ]
00:08:21.750 }
00:08:21.750 }
00:08:21.750 }'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:21.750 BaseBdev2
00:08:21.750 BaseBdev3'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.750 [2024-12-12 09:21:55.761235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:21.750 [2024-12-12 09:21:55.761280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:21.750 [2024-12-12 09:21:55.761396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:21.750 [2024-12-12 09:21:55.761461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:21.750 [2024-12-12 09:21:55.761474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64969
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64969 ']'
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64969
00:08:21.750 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:22.009 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:22.009 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64969
00:08:22.009 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:22.009 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:22.009 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64969'
00:08:22.009 killing process with pid 64969
00:08:22.009 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64969
00:08:22.010 [2024-12-12 09:21:55.800544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:22.010 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64969
00:08:22.269 [2024-12-12 09:21:56.125687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:23.652
00:08:23.652 real 0m10.578s
00:08:23.652 user 0m16.538s
00:08:23.652 sys 0m1.936s
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.652 ************************************
00:08:23.652 END TEST raid_state_function_test
00:08:23.652 ************************************
00:08:23.652 09:21:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:08:23.652 09:21:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:23.652 09:21:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:23.652 09:21:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:23.652 ************************************
00:08:23.652 START TEST raid_state_function_test_sb
00:08:23.652 ************************************
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
Process raid pid: 65586
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65586
00:08:23.652 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65586'
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65586
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 65586 ']'
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.653 09:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:23.653 [2024-12-12 09:21:57.514281] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:08:23.653 [2024-12-12 09:21:57.514391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-12 09:21:57.690751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-12 09:21:57.828996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-12 09:21:58.066578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-12-12 09:21:58.066626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.430 [2024-12-12 09:21:58.335039] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:24.430 [2024-12-12 09:21:58.335188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:24.430 [2024-12-12 09:21:58.335219] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:24.430 [2024-12-12 09:21:58.335242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:24.430 [2024-12-12 09:21:58.335267] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:24.430 [2024-12-12 09:21:58.335278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.430 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.431 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.431 "name": "Existed_Raid",
00:08:24.431 "uuid": "f802f0d8-4ce6-4e64-b6f0-a64683c7ba8c",
00:08:24.431 "strip_size_kb": 64,
00:08:24.431 "state": "configuring",
00:08:24.431 "raid_level": "raid0",
00:08:24.431 "superblock": true,
00:08:24.431 "num_base_bdevs": 3,
00:08:24.431 "num_base_bdevs_discovered": 0,
00:08:24.431 "num_base_bdevs_operational": 3,
00:08:24.431 "base_bdevs_list": [
00:08:24.431 {
00:08:24.431 "name": "BaseBdev1",
00:08:24.431 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.431 "is_configured": false,
00:08:24.431 "data_offset": 0,
00:08:24.431 "data_size": 0
00:08:24.431 },
00:08:24.431 {
00:08:24.431 "name": "BaseBdev2",
00:08:24.431 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.431 "is_configured": false,
00:08:24.431 "data_offset": 0,
00:08:24.431 "data_size": 0
00:08:24.431 },
00:08:24.431 {
00:08:24.431 "name": "BaseBdev3",
00:08:24.431 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.431 "is_configured": false,
00:08:24.431 "data_offset": 0,
00:08:24.431 "data_size": 0
00:08:24.431 }
00:08:24.431 ]
00:08:24.431 }'
00:08:24.431 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.431 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.018 [2024-12-12 09:21:58.750278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:25.018 [2024-12-12 09:21:58.750407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.018 [2024-12-12 09:21:58.758264] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:25.018 [2024-12-12 09:21:58.758373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:25.018 [2024-12-12 09:21:58.758400] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:25.018 [2024-12-12 09:21:58.758423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:25.018 [2024-12-12 09:21:58.758441] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:25.018 [2024-12-12 09:21:58.758462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.018 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.019 [2024-12-12 09:21:58.806544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:25.019 BaseBdev1
00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.019 [ 00:08:25.019 { 00:08:25.019 "name": "BaseBdev1", 00:08:25.019 "aliases": [ 00:08:25.019 "e4921b10-9a9a-4630-868f-a0bdc3ce86af" 00:08:25.019 ], 00:08:25.019 "product_name": "Malloc disk", 00:08:25.019 "block_size": 512, 00:08:25.019 "num_blocks": 65536, 00:08:25.019 "uuid": "e4921b10-9a9a-4630-868f-a0bdc3ce86af", 00:08:25.019 "assigned_rate_limits": { 00:08:25.019 
"rw_ios_per_sec": 0, 00:08:25.019 "rw_mbytes_per_sec": 0, 00:08:25.019 "r_mbytes_per_sec": 0, 00:08:25.019 "w_mbytes_per_sec": 0 00:08:25.019 }, 00:08:25.019 "claimed": true, 00:08:25.019 "claim_type": "exclusive_write", 00:08:25.019 "zoned": false, 00:08:25.019 "supported_io_types": { 00:08:25.019 "read": true, 00:08:25.019 "write": true, 00:08:25.019 "unmap": true, 00:08:25.019 "flush": true, 00:08:25.019 "reset": true, 00:08:25.019 "nvme_admin": false, 00:08:25.019 "nvme_io": false, 00:08:25.019 "nvme_io_md": false, 00:08:25.019 "write_zeroes": true, 00:08:25.019 "zcopy": true, 00:08:25.019 "get_zone_info": false, 00:08:25.019 "zone_management": false, 00:08:25.019 "zone_append": false, 00:08:25.019 "compare": false, 00:08:25.019 "compare_and_write": false, 00:08:25.019 "abort": true, 00:08:25.019 "seek_hole": false, 00:08:25.019 "seek_data": false, 00:08:25.019 "copy": true, 00:08:25.019 "nvme_iov_md": false 00:08:25.019 }, 00:08:25.019 "memory_domains": [ 00:08:25.019 { 00:08:25.019 "dma_device_id": "system", 00:08:25.019 "dma_device_type": 1 00:08:25.019 }, 00:08:25.019 { 00:08:25.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.019 "dma_device_type": 2 00:08:25.019 } 00:08:25.019 ], 00:08:25.019 "driver_specific": {} 00:08:25.019 } 00:08:25.019 ] 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.019 "name": "Existed_Raid", 00:08:25.019 "uuid": "d6dec04b-383d-448e-9e0e-f6880009c17e", 00:08:25.019 "strip_size_kb": 64, 00:08:25.019 "state": "configuring", 00:08:25.019 "raid_level": "raid0", 00:08:25.019 "superblock": true, 00:08:25.019 "num_base_bdevs": 3, 00:08:25.019 "num_base_bdevs_discovered": 1, 00:08:25.019 "num_base_bdevs_operational": 3, 00:08:25.019 "base_bdevs_list": [ 00:08:25.019 { 00:08:25.019 "name": "BaseBdev1", 00:08:25.019 "uuid": "e4921b10-9a9a-4630-868f-a0bdc3ce86af", 00:08:25.019 "is_configured": true, 00:08:25.019 "data_offset": 2048, 00:08:25.019 "data_size": 63488 
00:08:25.019 }, 00:08:25.019 { 00:08:25.019 "name": "BaseBdev2", 00:08:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.019 "is_configured": false, 00:08:25.019 "data_offset": 0, 00:08:25.019 "data_size": 0 00:08:25.019 }, 00:08:25.019 { 00:08:25.019 "name": "BaseBdev3", 00:08:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.019 "is_configured": false, 00:08:25.019 "data_offset": 0, 00:08:25.019 "data_size": 0 00:08:25.019 } 00:08:25.019 ] 00:08:25.019 }' 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.019 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.279 [2024-12-12 09:21:59.253867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.279 [2024-12-12 09:21:59.254060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.279 [2024-12-12 09:21:59.261915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.279 [2024-12-12 
09:21:59.264178] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.279 [2024-12-12 09:21:59.264263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.279 [2024-12-12 09:21:59.264292] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.279 [2024-12-12 09:21:59.264315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.279 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.280 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.539 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.539 "name": "Existed_Raid", 00:08:25.539 "uuid": "91a2a6b9-e27d-4e97-b251-a0959c401ead", 00:08:25.539 "strip_size_kb": 64, 00:08:25.539 "state": "configuring", 00:08:25.539 "raid_level": "raid0", 00:08:25.539 "superblock": true, 00:08:25.539 "num_base_bdevs": 3, 00:08:25.539 "num_base_bdevs_discovered": 1, 00:08:25.539 "num_base_bdevs_operational": 3, 00:08:25.539 "base_bdevs_list": [ 00:08:25.539 { 00:08:25.539 "name": "BaseBdev1", 00:08:25.539 "uuid": "e4921b10-9a9a-4630-868f-a0bdc3ce86af", 00:08:25.539 "is_configured": true, 00:08:25.539 "data_offset": 2048, 00:08:25.539 "data_size": 63488 00:08:25.539 }, 00:08:25.539 { 00:08:25.539 "name": "BaseBdev2", 00:08:25.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.539 "is_configured": false, 00:08:25.539 "data_offset": 0, 00:08:25.539 "data_size": 0 00:08:25.539 }, 00:08:25.539 { 00:08:25.539 "name": "BaseBdev3", 00:08:25.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.539 "is_configured": false, 00:08:25.539 "data_offset": 0, 00:08:25.539 "data_size": 0 00:08:25.539 } 00:08:25.539 ] 00:08:25.539 }' 00:08:25.539 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.539 09:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.799 [2024-12-12 09:21:59.760820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.799 BaseBdev2 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.799 [ 00:08:25.799 { 00:08:25.799 "name": "BaseBdev2", 00:08:25.799 "aliases": [ 00:08:25.799 "d20f39c9-604e-4c60-b729-ce9eda107d06" 00:08:25.799 ], 00:08:25.799 "product_name": "Malloc disk", 00:08:25.799 "block_size": 512, 00:08:25.799 "num_blocks": 65536, 00:08:25.799 "uuid": "d20f39c9-604e-4c60-b729-ce9eda107d06", 00:08:25.799 "assigned_rate_limits": { 00:08:25.799 "rw_ios_per_sec": 0, 00:08:25.799 "rw_mbytes_per_sec": 0, 00:08:25.799 "r_mbytes_per_sec": 0, 00:08:25.799 "w_mbytes_per_sec": 0 00:08:25.799 }, 00:08:25.799 "claimed": true, 00:08:25.799 "claim_type": "exclusive_write", 00:08:25.799 "zoned": false, 00:08:25.799 "supported_io_types": { 00:08:25.799 "read": true, 00:08:25.799 "write": true, 00:08:25.799 "unmap": true, 00:08:25.799 "flush": true, 00:08:25.799 "reset": true, 00:08:25.799 "nvme_admin": false, 00:08:25.799 "nvme_io": false, 00:08:25.799 "nvme_io_md": false, 00:08:25.799 "write_zeroes": true, 00:08:25.799 "zcopy": true, 00:08:25.799 "get_zone_info": false, 00:08:25.799 "zone_management": false, 00:08:25.799 "zone_append": false, 00:08:25.799 "compare": false, 00:08:25.799 "compare_and_write": false, 00:08:25.799 "abort": true, 00:08:25.799 "seek_hole": false, 00:08:25.799 "seek_data": false, 00:08:25.799 "copy": true, 00:08:25.799 "nvme_iov_md": false 00:08:25.799 }, 00:08:25.799 "memory_domains": [ 00:08:25.799 { 00:08:25.799 "dma_device_id": "system", 00:08:25.799 "dma_device_type": 1 00:08:25.799 }, 00:08:25.799 { 00:08:25.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.799 "dma_device_type": 2 00:08:25.799 } 00:08:25.799 ], 00:08:25.799 "driver_specific": {} 00:08:25.799 } 00:08:25.799 ] 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.799 09:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.059 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.059 "name": "Existed_Raid", 00:08:26.059 "uuid": "91a2a6b9-e27d-4e97-b251-a0959c401ead", 00:08:26.059 "strip_size_kb": 64, 00:08:26.059 "state": "configuring", 00:08:26.059 "raid_level": "raid0", 00:08:26.059 "superblock": true, 00:08:26.059 "num_base_bdevs": 3, 00:08:26.059 "num_base_bdevs_discovered": 2, 00:08:26.059 "num_base_bdevs_operational": 3, 00:08:26.059 "base_bdevs_list": [ 00:08:26.059 { 00:08:26.059 "name": "BaseBdev1", 00:08:26.059 "uuid": "e4921b10-9a9a-4630-868f-a0bdc3ce86af", 00:08:26.059 "is_configured": true, 00:08:26.059 "data_offset": 2048, 00:08:26.059 "data_size": 63488 00:08:26.059 }, 00:08:26.059 { 00:08:26.059 "name": "BaseBdev2", 00:08:26.059 "uuid": "d20f39c9-604e-4c60-b729-ce9eda107d06", 00:08:26.059 "is_configured": true, 00:08:26.059 "data_offset": 2048, 00:08:26.059 "data_size": 63488 00:08:26.059 }, 00:08:26.059 { 00:08:26.059 "name": "BaseBdev3", 00:08:26.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.059 "is_configured": false, 00:08:26.059 "data_offset": 0, 00:08:26.059 "data_size": 0 00:08:26.059 } 00:08:26.059 ] 00:08:26.059 }' 00:08:26.059 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.059 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 [2024-12-12 09:22:00.284692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.319 [2024-12-12 09:22:00.285083] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:26.319 [2024-12-12 09:22:00.285149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.319 BaseBdev3 00:08:26.319 [2024-12-12 09:22:00.285477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.319 [2024-12-12 09:22:00.285691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:26.319 [2024-12-12 09:22:00.285738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:26.319 [2024-12-12 09:22:00.285905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 [ 00:08:26.319 { 00:08:26.319 "name": "BaseBdev3", 00:08:26.319 "aliases": [ 00:08:26.319 "33908870-64c4-4702-a48c-4c7724961703" 00:08:26.319 ], 00:08:26.319 "product_name": "Malloc disk", 00:08:26.319 "block_size": 512, 00:08:26.319 "num_blocks": 65536, 00:08:26.319 "uuid": "33908870-64c4-4702-a48c-4c7724961703", 00:08:26.319 "assigned_rate_limits": { 00:08:26.319 "rw_ios_per_sec": 0, 00:08:26.319 "rw_mbytes_per_sec": 0, 00:08:26.319 "r_mbytes_per_sec": 0, 00:08:26.319 "w_mbytes_per_sec": 0 00:08:26.319 }, 00:08:26.319 "claimed": true, 00:08:26.319 "claim_type": "exclusive_write", 00:08:26.319 "zoned": false, 00:08:26.319 "supported_io_types": { 00:08:26.319 "read": true, 00:08:26.319 "write": true, 00:08:26.319 "unmap": true, 00:08:26.319 "flush": true, 00:08:26.319 "reset": true, 00:08:26.319 "nvme_admin": false, 00:08:26.319 "nvme_io": false, 00:08:26.319 "nvme_io_md": false, 00:08:26.319 "write_zeroes": true, 00:08:26.319 "zcopy": true, 00:08:26.319 "get_zone_info": false, 00:08:26.319 "zone_management": false, 00:08:26.319 "zone_append": false, 00:08:26.319 "compare": false, 00:08:26.319 "compare_and_write": false, 00:08:26.319 "abort": true, 00:08:26.319 "seek_hole": false, 00:08:26.319 "seek_data": false, 00:08:26.319 "copy": true, 00:08:26.319 "nvme_iov_md": false 00:08:26.319 }, 00:08:26.319 "memory_domains": [ 00:08:26.319 { 00:08:26.319 "dma_device_id": "system", 00:08:26.319 "dma_device_type": 1 00:08:26.319 }, 00:08:26.319 { 00:08:26.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.319 "dma_device_type": 2 00:08:26.319 } 00:08:26.319 ], 00:08:26.319 "driver_specific": 
{} 00:08:26.319 } 00:08:26.319 ] 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.579 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.579 "name": "Existed_Raid", 00:08:26.579 "uuid": "91a2a6b9-e27d-4e97-b251-a0959c401ead", 00:08:26.579 "strip_size_kb": 64, 00:08:26.579 "state": "online", 00:08:26.579 "raid_level": "raid0", 00:08:26.579 "superblock": true, 00:08:26.579 "num_base_bdevs": 3, 00:08:26.579 "num_base_bdevs_discovered": 3, 00:08:26.579 "num_base_bdevs_operational": 3, 00:08:26.579 "base_bdevs_list": [ 00:08:26.579 { 00:08:26.579 "name": "BaseBdev1", 00:08:26.579 "uuid": "e4921b10-9a9a-4630-868f-a0bdc3ce86af", 00:08:26.579 "is_configured": true, 00:08:26.579 "data_offset": 2048, 00:08:26.579 "data_size": 63488 00:08:26.579 }, 00:08:26.579 { 00:08:26.579 "name": "BaseBdev2", 00:08:26.579 "uuid": "d20f39c9-604e-4c60-b729-ce9eda107d06", 00:08:26.579 "is_configured": true, 00:08:26.579 "data_offset": 2048, 00:08:26.579 "data_size": 63488 00:08:26.579 }, 00:08:26.579 { 00:08:26.579 "name": "BaseBdev3", 00:08:26.579 "uuid": "33908870-64c4-4702-a48c-4c7724961703", 00:08:26.579 "is_configured": true, 00:08:26.579 "data_offset": 2048, 00:08:26.579 "data_size": 63488 00:08:26.579 } 00:08:26.579 ] 00:08:26.579 }' 00:08:26.579 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.579 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.838 [2024-12-12 09:22:00.716388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.838 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.838 "name": "Existed_Raid", 00:08:26.838 "aliases": [ 00:08:26.838 "91a2a6b9-e27d-4e97-b251-a0959c401ead" 00:08:26.838 ], 00:08:26.838 "product_name": "Raid Volume", 00:08:26.838 "block_size": 512, 00:08:26.838 "num_blocks": 190464, 00:08:26.838 "uuid": "91a2a6b9-e27d-4e97-b251-a0959c401ead", 00:08:26.838 "assigned_rate_limits": { 00:08:26.838 "rw_ios_per_sec": 0, 00:08:26.838 "rw_mbytes_per_sec": 0, 00:08:26.838 "r_mbytes_per_sec": 0, 00:08:26.838 "w_mbytes_per_sec": 0 00:08:26.838 }, 00:08:26.838 "claimed": false, 00:08:26.838 "zoned": false, 00:08:26.838 "supported_io_types": { 00:08:26.838 "read": true, 00:08:26.838 "write": true, 00:08:26.838 "unmap": true, 00:08:26.838 "flush": true, 00:08:26.838 "reset": true, 00:08:26.838 "nvme_admin": false, 00:08:26.838 "nvme_io": false, 00:08:26.838 "nvme_io_md": false, 00:08:26.838 
"write_zeroes": true, 00:08:26.838 "zcopy": false, 00:08:26.838 "get_zone_info": false, 00:08:26.838 "zone_management": false, 00:08:26.838 "zone_append": false, 00:08:26.838 "compare": false, 00:08:26.838 "compare_and_write": false, 00:08:26.838 "abort": false, 00:08:26.838 "seek_hole": false, 00:08:26.838 "seek_data": false, 00:08:26.838 "copy": false, 00:08:26.838 "nvme_iov_md": false 00:08:26.838 }, 00:08:26.838 "memory_domains": [ 00:08:26.838 { 00:08:26.838 "dma_device_id": "system", 00:08:26.838 "dma_device_type": 1 00:08:26.838 }, 00:08:26.838 { 00:08:26.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.838 "dma_device_type": 2 00:08:26.838 }, 00:08:26.838 { 00:08:26.838 "dma_device_id": "system", 00:08:26.838 "dma_device_type": 1 00:08:26.838 }, 00:08:26.838 { 00:08:26.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.838 "dma_device_type": 2 00:08:26.838 }, 00:08:26.838 { 00:08:26.838 "dma_device_id": "system", 00:08:26.839 "dma_device_type": 1 00:08:26.839 }, 00:08:26.839 { 00:08:26.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.839 "dma_device_type": 2 00:08:26.839 } 00:08:26.839 ], 00:08:26.839 "driver_specific": { 00:08:26.839 "raid": { 00:08:26.839 "uuid": "91a2a6b9-e27d-4e97-b251-a0959c401ead", 00:08:26.839 "strip_size_kb": 64, 00:08:26.839 "state": "online", 00:08:26.839 "raid_level": "raid0", 00:08:26.839 "superblock": true, 00:08:26.839 "num_base_bdevs": 3, 00:08:26.839 "num_base_bdevs_discovered": 3, 00:08:26.839 "num_base_bdevs_operational": 3, 00:08:26.839 "base_bdevs_list": [ 00:08:26.839 { 00:08:26.839 "name": "BaseBdev1", 00:08:26.839 "uuid": "e4921b10-9a9a-4630-868f-a0bdc3ce86af", 00:08:26.839 "is_configured": true, 00:08:26.839 "data_offset": 2048, 00:08:26.839 "data_size": 63488 00:08:26.839 }, 00:08:26.839 { 00:08:26.839 "name": "BaseBdev2", 00:08:26.839 "uuid": "d20f39c9-604e-4c60-b729-ce9eda107d06", 00:08:26.839 "is_configured": true, 00:08:26.839 "data_offset": 2048, 00:08:26.839 "data_size": 63488 00:08:26.839 }, 
00:08:26.839 { 00:08:26.839 "name": "BaseBdev3", 00:08:26.839 "uuid": "33908870-64c4-4702-a48c-4c7724961703", 00:08:26.839 "is_configured": true, 00:08:26.839 "data_offset": 2048, 00:08:26.839 "data_size": 63488 00:08:26.839 } 00:08:26.839 ] 00:08:26.839 } 00:08:26.839 } 00:08:26.839 }' 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.839 BaseBdev2 00:08:26.839 BaseBdev3' 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.839 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.098 
09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.098 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 [2024-12-12 09:22:00.987669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.098 [2024-12-12 09:22:00.987781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.098 [2024-12-12 09:22:00.987867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.098 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.098 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:27.098 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:27.098 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.098 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.098 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.099 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.357 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.357 "name": "Existed_Raid", 00:08:27.357 "uuid": "91a2a6b9-e27d-4e97-b251-a0959c401ead", 00:08:27.357 "strip_size_kb": 64, 00:08:27.357 "state": "offline", 00:08:27.357 "raid_level": "raid0", 00:08:27.357 "superblock": true, 00:08:27.357 "num_base_bdevs": 3, 00:08:27.357 "num_base_bdevs_discovered": 2, 00:08:27.357 "num_base_bdevs_operational": 2, 00:08:27.357 "base_bdevs_list": [ 00:08:27.357 { 00:08:27.357 "name": null, 00:08:27.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.357 "is_configured": false, 00:08:27.357 "data_offset": 0, 00:08:27.357 "data_size": 63488 00:08:27.357 }, 00:08:27.357 { 00:08:27.357 "name": "BaseBdev2", 00:08:27.357 "uuid": "d20f39c9-604e-4c60-b729-ce9eda107d06", 00:08:27.357 "is_configured": true, 00:08:27.357 "data_offset": 2048, 00:08:27.358 "data_size": 63488 00:08:27.358 }, 00:08:27.358 { 00:08:27.358 "name": "BaseBdev3", 00:08:27.358 "uuid": "33908870-64c4-4702-a48c-4c7724961703", 
00:08:27.358 "is_configured": true, 00:08:27.358 "data_offset": 2048, 00:08:27.358 "data_size": 63488 00:08:27.358 } 00:08:27.358 ] 00:08:27.358 }' 00:08:27.358 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.358 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.616 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.616 [2024-12-12 09:22:01.548739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.875 [2024-12-12 09:22:01.710676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:27.875 [2024-12-12 09:22:01.710816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:27.875 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.876 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.876 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.876 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.135 BaseBdev2 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.135 09:22:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.135 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.135 [ 00:08:28.135 { 00:08:28.135 "name": "BaseBdev2", 00:08:28.135 "aliases": [ 00:08:28.135 "218e251a-5bc2-492f-ba50-81e6876fae4f" 00:08:28.135 ], 00:08:28.135 "product_name": "Malloc disk", 00:08:28.135 "block_size": 512, 00:08:28.135 "num_blocks": 65536, 00:08:28.135 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:28.135 "assigned_rate_limits": { 00:08:28.135 "rw_ios_per_sec": 0, 00:08:28.135 "rw_mbytes_per_sec": 0, 00:08:28.135 "r_mbytes_per_sec": 0, 00:08:28.135 "w_mbytes_per_sec": 0 00:08:28.135 }, 00:08:28.135 "claimed": false, 00:08:28.135 "zoned": false, 00:08:28.135 "supported_io_types": { 00:08:28.135 "read": true, 00:08:28.135 "write": true, 00:08:28.135 "unmap": true, 00:08:28.135 "flush": true, 00:08:28.135 "reset": true, 00:08:28.135 "nvme_admin": false, 00:08:28.136 "nvme_io": false, 00:08:28.136 "nvme_io_md": false, 00:08:28.136 "write_zeroes": true, 00:08:28.136 "zcopy": true, 00:08:28.136 "get_zone_info": false, 00:08:28.136 
"zone_management": false, 00:08:28.136 "zone_append": false, 00:08:28.136 "compare": false, 00:08:28.136 "compare_and_write": false, 00:08:28.136 "abort": true, 00:08:28.136 "seek_hole": false, 00:08:28.136 "seek_data": false, 00:08:28.136 "copy": true, 00:08:28.136 "nvme_iov_md": false 00:08:28.136 }, 00:08:28.136 "memory_domains": [ 00:08:28.136 { 00:08:28.136 "dma_device_id": "system", 00:08:28.136 "dma_device_type": 1 00:08:28.136 }, 00:08:28.136 { 00:08:28.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.136 "dma_device_type": 2 00:08:28.136 } 00:08:28.136 ], 00:08:28.136 "driver_specific": {} 00:08:28.136 } 00:08:28.136 ] 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.136 BaseBdev3 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.136 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.136 [ 00:08:28.136 { 00:08:28.136 "name": "BaseBdev3", 00:08:28.136 "aliases": [ 00:08:28.136 "410b4b48-7c51-427f-82df-5fc8b006638c" 00:08:28.136 ], 00:08:28.136 "product_name": "Malloc disk", 00:08:28.136 "block_size": 512, 00:08:28.136 "num_blocks": 65536, 00:08:28.136 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:28.136 "assigned_rate_limits": { 00:08:28.136 "rw_ios_per_sec": 0, 00:08:28.136 "rw_mbytes_per_sec": 0, 00:08:28.136 "r_mbytes_per_sec": 0, 00:08:28.136 "w_mbytes_per_sec": 0 00:08:28.136 }, 00:08:28.136 "claimed": false, 00:08:28.136 "zoned": false, 00:08:28.136 "supported_io_types": { 00:08:28.136 "read": true, 00:08:28.136 "write": true, 00:08:28.136 "unmap": true, 00:08:28.136 "flush": true, 00:08:28.136 "reset": true, 00:08:28.136 "nvme_admin": false, 00:08:28.136 "nvme_io": false, 00:08:28.136 "nvme_io_md": false, 00:08:28.136 "write_zeroes": true, 00:08:28.136 
"zcopy": true, 00:08:28.136 "get_zone_info": false, 00:08:28.136 "zone_management": false, 00:08:28.136 "zone_append": false, 00:08:28.136 "compare": false, 00:08:28.136 "compare_and_write": false, 00:08:28.136 "abort": true, 00:08:28.136 "seek_hole": false, 00:08:28.136 "seek_data": false, 00:08:28.136 "copy": true, 00:08:28.136 "nvme_iov_md": false 00:08:28.136 }, 00:08:28.136 "memory_domains": [ 00:08:28.136 { 00:08:28.136 "dma_device_id": "system", 00:08:28.136 "dma_device_type": 1 00:08:28.136 }, 00:08:28.136 { 00:08:28.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.136 "dma_device_type": 2 00:08:28.136 } 00:08:28.136 ], 00:08:28.136 "driver_specific": {} 00:08:28.136 } 00:08:28.136 ] 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.136 [2024-12-12 09:22:02.003950] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.136 [2024-12-12 09:22:02.004091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.136 [2024-12-12 09:22:02.004138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.136 [2024-12-12 09:22:02.006231] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.136 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.137 09:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.137 "name": "Existed_Raid", 00:08:28.137 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:28.137 "strip_size_kb": 64, 00:08:28.137 "state": "configuring", 00:08:28.137 "raid_level": "raid0", 00:08:28.137 "superblock": true, 00:08:28.137 "num_base_bdevs": 3, 00:08:28.137 "num_base_bdevs_discovered": 2, 00:08:28.137 "num_base_bdevs_operational": 3, 00:08:28.137 "base_bdevs_list": [ 00:08:28.137 { 00:08:28.137 "name": "BaseBdev1", 00:08:28.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.137 "is_configured": false, 00:08:28.137 "data_offset": 0, 00:08:28.137 "data_size": 0 00:08:28.137 }, 00:08:28.137 { 00:08:28.137 "name": "BaseBdev2", 00:08:28.137 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:28.137 "is_configured": true, 00:08:28.137 "data_offset": 2048, 00:08:28.137 "data_size": 63488 00:08:28.137 }, 00:08:28.137 { 00:08:28.137 "name": "BaseBdev3", 00:08:28.137 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:28.137 "is_configured": true, 00:08:28.137 "data_offset": 2048, 00:08:28.137 "data_size": 63488 00:08:28.137 } 00:08:28.137 ] 00:08:28.137 }' 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.137 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.705 [2024-12-12 09:22:02.435667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.705 09:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.705 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.705 "name": "Existed_Raid", 00:08:28.705 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:28.705 "strip_size_kb": 64, 
00:08:28.705 "state": "configuring", 00:08:28.705 "raid_level": "raid0", 00:08:28.705 "superblock": true, 00:08:28.705 "num_base_bdevs": 3, 00:08:28.705 "num_base_bdevs_discovered": 1, 00:08:28.705 "num_base_bdevs_operational": 3, 00:08:28.705 "base_bdevs_list": [ 00:08:28.705 { 00:08:28.705 "name": "BaseBdev1", 00:08:28.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.705 "is_configured": false, 00:08:28.705 "data_offset": 0, 00:08:28.705 "data_size": 0 00:08:28.705 }, 00:08:28.705 { 00:08:28.705 "name": null, 00:08:28.705 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:28.705 "is_configured": false, 00:08:28.705 "data_offset": 0, 00:08:28.705 "data_size": 63488 00:08:28.705 }, 00:08:28.705 { 00:08:28.705 "name": "BaseBdev3", 00:08:28.705 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:28.705 "is_configured": true, 00:08:28.705 "data_offset": 2048, 00:08:28.706 "data_size": 63488 00:08:28.706 } 00:08:28.706 ] 00:08:28.706 }' 00:08:28.706 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.706 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 BaseBdev1 00:08:28.965 [2024-12-12 09:22:02.970445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.965 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 
[ 00:08:28.965 { 00:08:28.965 "name": "BaseBdev1", 00:08:28.965 "aliases": [ 00:08:28.965 "1e032c39-f50e-4d72-b3b9-0767d6a0d00a" 00:08:28.965 ], 00:08:28.965 "product_name": "Malloc disk", 00:08:28.965 "block_size": 512, 00:08:28.965 "num_blocks": 65536, 00:08:28.965 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:28.965 "assigned_rate_limits": { 00:08:28.965 "rw_ios_per_sec": 0, 00:08:28.965 "rw_mbytes_per_sec": 0, 00:08:28.965 "r_mbytes_per_sec": 0, 00:08:28.965 "w_mbytes_per_sec": 0 00:08:28.965 }, 00:08:28.965 "claimed": true, 00:08:28.965 "claim_type": "exclusive_write", 00:08:28.965 "zoned": false, 00:08:28.965 "supported_io_types": { 00:08:28.965 "read": true, 00:08:28.965 "write": true, 00:08:29.225 "unmap": true, 00:08:29.225 "flush": true, 00:08:29.225 "reset": true, 00:08:29.225 "nvme_admin": false, 00:08:29.225 "nvme_io": false, 00:08:29.225 "nvme_io_md": false, 00:08:29.225 "write_zeroes": true, 00:08:29.225 "zcopy": true, 00:08:29.225 "get_zone_info": false, 00:08:29.225 "zone_management": false, 00:08:29.225 "zone_append": false, 00:08:29.225 "compare": false, 00:08:29.225 "compare_and_write": false, 00:08:29.225 "abort": true, 00:08:29.225 "seek_hole": false, 00:08:29.225 "seek_data": false, 00:08:29.225 "copy": true, 00:08:29.225 "nvme_iov_md": false 00:08:29.225 }, 00:08:29.225 "memory_domains": [ 00:08:29.225 { 00:08:29.225 "dma_device_id": "system", 00:08:29.225 "dma_device_type": 1 00:08:29.225 }, 00:08:29.225 { 00:08:29.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.225 "dma_device_type": 2 00:08:29.225 } 00:08:29.225 ], 00:08:29.225 "driver_specific": {} 00:08:29.225 } 00:08:29.225 ] 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.225 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.225 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.225 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.225 "name": "Existed_Raid", 00:08:29.225 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:29.225 "strip_size_kb": 64, 00:08:29.225 "state": "configuring", 00:08:29.225 "raid_level": "raid0", 00:08:29.225 "superblock": true, 
00:08:29.225 "num_base_bdevs": 3, 00:08:29.225 "num_base_bdevs_discovered": 2, 00:08:29.225 "num_base_bdevs_operational": 3, 00:08:29.225 "base_bdevs_list": [ 00:08:29.225 { 00:08:29.225 "name": "BaseBdev1", 00:08:29.225 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:29.225 "is_configured": true, 00:08:29.225 "data_offset": 2048, 00:08:29.225 "data_size": 63488 00:08:29.225 }, 00:08:29.225 { 00:08:29.225 "name": null, 00:08:29.225 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:29.225 "is_configured": false, 00:08:29.225 "data_offset": 0, 00:08:29.225 "data_size": 63488 00:08:29.225 }, 00:08:29.225 { 00:08:29.225 "name": "BaseBdev3", 00:08:29.225 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:29.225 "is_configured": true, 00:08:29.225 "data_offset": 2048, 00:08:29.225 "data_size": 63488 00:08:29.225 } 00:08:29.225 ] 00:08:29.225 }' 00:08:29.225 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.225 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.485 [2024-12-12 09:22:03.417786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.485 "name": "Existed_Raid", 00:08:29.485 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:29.485 "strip_size_kb": 64, 00:08:29.485 "state": "configuring", 00:08:29.485 "raid_level": "raid0", 00:08:29.485 "superblock": true, 00:08:29.485 "num_base_bdevs": 3, 00:08:29.485 "num_base_bdevs_discovered": 1, 00:08:29.485 "num_base_bdevs_operational": 3, 00:08:29.485 "base_bdevs_list": [ 00:08:29.485 { 00:08:29.485 "name": "BaseBdev1", 00:08:29.485 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:29.485 "is_configured": true, 00:08:29.485 "data_offset": 2048, 00:08:29.485 "data_size": 63488 00:08:29.485 }, 00:08:29.485 { 00:08:29.485 "name": null, 00:08:29.485 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:29.485 "is_configured": false, 00:08:29.485 "data_offset": 0, 00:08:29.485 "data_size": 63488 00:08:29.485 }, 00:08:29.485 { 00:08:29.485 "name": null, 00:08:29.485 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:29.485 "is_configured": false, 00:08:29.485 "data_offset": 0, 00:08:29.485 "data_size": 63488 00:08:29.485 } 00:08:29.485 ] 00:08:29.485 }' 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.485 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 [2024-12-12 09:22:03.889018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.054 "name": "Existed_Raid", 00:08:30.054 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:30.054 "strip_size_kb": 64, 00:08:30.054 "state": "configuring", 00:08:30.054 "raid_level": "raid0", 00:08:30.054 "superblock": true, 00:08:30.054 "num_base_bdevs": 3, 00:08:30.054 "num_base_bdevs_discovered": 2, 00:08:30.054 "num_base_bdevs_operational": 3, 00:08:30.054 "base_bdevs_list": [ 00:08:30.054 { 00:08:30.054 "name": "BaseBdev1", 00:08:30.054 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:30.054 "is_configured": true, 00:08:30.054 "data_offset": 2048, 00:08:30.054 "data_size": 63488 00:08:30.054 }, 00:08:30.054 { 00:08:30.054 "name": null, 00:08:30.054 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:30.054 "is_configured": false, 00:08:30.054 "data_offset": 0, 00:08:30.054 "data_size": 63488 00:08:30.054 }, 00:08:30.054 { 00:08:30.054 "name": "BaseBdev3", 00:08:30.054 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:30.054 "is_configured": true, 00:08:30.054 "data_offset": 2048, 00:08:30.054 "data_size": 63488 00:08:30.054 } 00:08:30.054 ] 00:08:30.054 }' 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.054 09:22:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.624 [2024-12-12 09:22:04.396192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.624 "name": "Existed_Raid", 00:08:30.624 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:30.624 "strip_size_kb": 64, 00:08:30.624 "state": "configuring", 00:08:30.624 "raid_level": "raid0", 00:08:30.624 "superblock": true, 00:08:30.624 "num_base_bdevs": 3, 00:08:30.624 "num_base_bdevs_discovered": 1, 00:08:30.624 "num_base_bdevs_operational": 3, 00:08:30.624 "base_bdevs_list": [ 00:08:30.624 { 00:08:30.624 "name": null, 00:08:30.624 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:30.624 "is_configured": false, 00:08:30.624 "data_offset": 0, 00:08:30.624 "data_size": 63488 00:08:30.624 }, 00:08:30.624 { 00:08:30.624 "name": null, 00:08:30.624 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:30.624 "is_configured": false, 00:08:30.624 "data_offset": 0, 00:08:30.624 
"data_size": 63488 00:08:30.624 }, 00:08:30.624 { 00:08:30.624 "name": "BaseBdev3", 00:08:30.624 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:30.624 "is_configured": true, 00:08:30.624 "data_offset": 2048, 00:08:30.624 "data_size": 63488 00:08:30.624 } 00:08:30.624 ] 00:08:30.624 }' 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.624 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.884 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.884 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.884 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.884 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.144 [2024-12-12 09:22:04.936123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.144 09:22:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.144 "name": "Existed_Raid", 00:08:31.144 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:31.144 "strip_size_kb": 64, 00:08:31.144 "state": "configuring", 00:08:31.144 "raid_level": "raid0", 00:08:31.144 "superblock": true, 00:08:31.144 "num_base_bdevs": 3, 00:08:31.144 
"num_base_bdevs_discovered": 2, 00:08:31.144 "num_base_bdevs_operational": 3, 00:08:31.144 "base_bdevs_list": [ 00:08:31.144 { 00:08:31.144 "name": null, 00:08:31.144 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:31.144 "is_configured": false, 00:08:31.144 "data_offset": 0, 00:08:31.144 "data_size": 63488 00:08:31.144 }, 00:08:31.144 { 00:08:31.144 "name": "BaseBdev2", 00:08:31.144 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:31.144 "is_configured": true, 00:08:31.144 "data_offset": 2048, 00:08:31.144 "data_size": 63488 00:08:31.144 }, 00:08:31.144 { 00:08:31.144 "name": "BaseBdev3", 00:08:31.144 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:31.144 "is_configured": true, 00:08:31.144 "data_offset": 2048, 00:08:31.144 "data_size": 63488 00:08:31.144 } 00:08:31.144 ] 00:08:31.144 }' 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.144 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.403 09:22:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.403 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e032c39-f50e-4d72-b3b9-0767d6a0d00a 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.663 [2024-12-12 09:22:05.489047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:31.663 NewBaseBdev 00:08:31.663 [2024-12-12 09:22:05.489382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:31.663 [2024-12-12 09:22:05.489405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.663 [2024-12-12 09:22:05.489684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:31.663 [2024-12-12 09:22:05.489833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:31.663 [2024-12-12 09:22:05.489844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:31.663 [2024-12-12 09:22:05.490008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.663 [ 00:08:31.663 { 00:08:31.663 "name": "NewBaseBdev", 00:08:31.663 "aliases": [ 00:08:31.663 "1e032c39-f50e-4d72-b3b9-0767d6a0d00a" 00:08:31.663 ], 00:08:31.663 "product_name": "Malloc disk", 00:08:31.663 "block_size": 512, 00:08:31.663 "num_blocks": 65536, 00:08:31.663 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:31.663 "assigned_rate_limits": { 00:08:31.663 "rw_ios_per_sec": 0, 00:08:31.663 "rw_mbytes_per_sec": 0, 00:08:31.663 "r_mbytes_per_sec": 0, 00:08:31.663 "w_mbytes_per_sec": 0 00:08:31.663 }, 00:08:31.663 "claimed": true, 00:08:31.663 "claim_type": "exclusive_write", 00:08:31.663 "zoned": false, 00:08:31.663 "supported_io_types": { 00:08:31.663 "read": true, 00:08:31.663 "write": true, 00:08:31.663 
"unmap": true, 00:08:31.663 "flush": true, 00:08:31.663 "reset": true, 00:08:31.663 "nvme_admin": false, 00:08:31.663 "nvme_io": false, 00:08:31.663 "nvme_io_md": false, 00:08:31.663 "write_zeroes": true, 00:08:31.663 "zcopy": true, 00:08:31.663 "get_zone_info": false, 00:08:31.663 "zone_management": false, 00:08:31.663 "zone_append": false, 00:08:31.663 "compare": false, 00:08:31.663 "compare_and_write": false, 00:08:31.663 "abort": true, 00:08:31.663 "seek_hole": false, 00:08:31.663 "seek_data": false, 00:08:31.663 "copy": true, 00:08:31.663 "nvme_iov_md": false 00:08:31.663 }, 00:08:31.663 "memory_domains": [ 00:08:31.663 { 00:08:31.663 "dma_device_id": "system", 00:08:31.663 "dma_device_type": 1 00:08:31.663 }, 00:08:31.663 { 00:08:31.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.663 "dma_device_type": 2 00:08:31.663 } 00:08:31.663 ], 00:08:31.663 "driver_specific": {} 00:08:31.663 } 00:08:31.663 ] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.663 "name": "Existed_Raid", 00:08:31.663 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:31.663 "strip_size_kb": 64, 00:08:31.663 "state": "online", 00:08:31.663 "raid_level": "raid0", 00:08:31.663 "superblock": true, 00:08:31.663 "num_base_bdevs": 3, 00:08:31.663 "num_base_bdevs_discovered": 3, 00:08:31.663 "num_base_bdevs_operational": 3, 00:08:31.663 "base_bdevs_list": [ 00:08:31.663 { 00:08:31.663 "name": "NewBaseBdev", 00:08:31.663 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:31.663 "is_configured": true, 00:08:31.663 "data_offset": 2048, 00:08:31.663 "data_size": 63488 00:08:31.663 }, 00:08:31.663 { 00:08:31.663 "name": "BaseBdev2", 00:08:31.663 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:31.663 "is_configured": true, 00:08:31.663 "data_offset": 2048, 00:08:31.663 "data_size": 63488 00:08:31.663 }, 00:08:31.663 { 00:08:31.663 "name": "BaseBdev3", 00:08:31.663 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:31.663 
"is_configured": true, 00:08:31.663 "data_offset": 2048, 00:08:31.663 "data_size": 63488 00:08:31.663 } 00:08:31.663 ] 00:08:31.663 }' 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.663 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 [2024-12-12 09:22:05.956594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.233 "name": "Existed_Raid", 00:08:32.233 "aliases": [ 00:08:32.233 "f34d6c36-39c8-4c9c-b8e9-14778fde85ab" 00:08:32.233 ], 00:08:32.233 "product_name": "Raid 
Volume", 00:08:32.233 "block_size": 512, 00:08:32.233 "num_blocks": 190464, 00:08:32.233 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:32.233 "assigned_rate_limits": { 00:08:32.233 "rw_ios_per_sec": 0, 00:08:32.233 "rw_mbytes_per_sec": 0, 00:08:32.233 "r_mbytes_per_sec": 0, 00:08:32.233 "w_mbytes_per_sec": 0 00:08:32.233 }, 00:08:32.233 "claimed": false, 00:08:32.233 "zoned": false, 00:08:32.233 "supported_io_types": { 00:08:32.233 "read": true, 00:08:32.233 "write": true, 00:08:32.233 "unmap": true, 00:08:32.233 "flush": true, 00:08:32.233 "reset": true, 00:08:32.233 "nvme_admin": false, 00:08:32.233 "nvme_io": false, 00:08:32.233 "nvme_io_md": false, 00:08:32.233 "write_zeroes": true, 00:08:32.233 "zcopy": false, 00:08:32.233 "get_zone_info": false, 00:08:32.233 "zone_management": false, 00:08:32.233 "zone_append": false, 00:08:32.233 "compare": false, 00:08:32.233 "compare_and_write": false, 00:08:32.233 "abort": false, 00:08:32.233 "seek_hole": false, 00:08:32.233 "seek_data": false, 00:08:32.233 "copy": false, 00:08:32.233 "nvme_iov_md": false 00:08:32.233 }, 00:08:32.233 "memory_domains": [ 00:08:32.233 { 00:08:32.233 "dma_device_id": "system", 00:08:32.233 "dma_device_type": 1 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.233 "dma_device_type": 2 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "dma_device_id": "system", 00:08:32.233 "dma_device_type": 1 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.233 "dma_device_type": 2 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "dma_device_id": "system", 00:08:32.233 "dma_device_type": 1 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.233 "dma_device_type": 2 00:08:32.233 } 00:08:32.233 ], 00:08:32.233 "driver_specific": { 00:08:32.233 "raid": { 00:08:32.233 "uuid": "f34d6c36-39c8-4c9c-b8e9-14778fde85ab", 00:08:32.233 "strip_size_kb": 64, 00:08:32.233 "state": "online", 
00:08:32.233 "raid_level": "raid0", 00:08:32.233 "superblock": true, 00:08:32.233 "num_base_bdevs": 3, 00:08:32.233 "num_base_bdevs_discovered": 3, 00:08:32.233 "num_base_bdevs_operational": 3, 00:08:32.233 "base_bdevs_list": [ 00:08:32.233 { 00:08:32.233 "name": "NewBaseBdev", 00:08:32.233 "uuid": "1e032c39-f50e-4d72-b3b9-0767d6a0d00a", 00:08:32.233 "is_configured": true, 00:08:32.233 "data_offset": 2048, 00:08:32.233 "data_size": 63488 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "name": "BaseBdev2", 00:08:32.233 "uuid": "218e251a-5bc2-492f-ba50-81e6876fae4f", 00:08:32.233 "is_configured": true, 00:08:32.233 "data_offset": 2048, 00:08:32.233 "data_size": 63488 00:08:32.233 }, 00:08:32.233 { 00:08:32.233 "name": "BaseBdev3", 00:08:32.233 "uuid": "410b4b48-7c51-427f-82df-5fc8b006638c", 00:08:32.233 "is_configured": true, 00:08:32.233 "data_offset": 2048, 00:08:32.233 "data_size": 63488 00:08:32.233 } 00:08:32.233 ] 00:08:32.233 } 00:08:32.233 } 00:08:32.233 }' 00:08:32.233 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:32.233 BaseBdev2 00:08:32.233 BaseBdev3' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.233 09:22:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.233 09:22:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 [2024-12-12 09:22:06.223880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.233 [2024-12-12 09:22:06.223967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.233 [2024-12-12 09:22:06.224089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.233 [2024-12-12 09:22:06.224171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.233 [2024-12-12 09:22:06.224219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65586 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 65586 ']' 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
65586 00:08:32.233 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:32.234 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.234 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65586 00:08:32.494 killing process with pid 65586 00:08:32.494 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.494 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.494 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65586' 00:08:32.494 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 65586 00:08:32.494 [2024-12-12 09:22:06.257004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.494 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 65586 00:08:32.753 [2024-12-12 09:22:06.571529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.132 ************************************ 00:08:34.132 END TEST raid_state_function_test_sb 00:08:34.132 ************************************ 00:08:34.132 09:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:34.132 00:08:34.132 real 0m10.362s 00:08:34.132 user 0m16.280s 00:08:34.132 sys 0m1.864s 00:08:34.132 09:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.132 09:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.132 09:22:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:34.132 09:22:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:34.132 
09:22:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.132 09:22:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.132 ************************************ 00:08:34.132 START TEST raid_superblock_test 00:08:34.132 ************************************ 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66206 00:08:34.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66206 00:08:34.132 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66206 ']' 00:08:34.133 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.133 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.133 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.133 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.133 09:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.133 09:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:34.133 [2024-12-12 09:22:07.929316] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:08:34.133 [2024-12-12 09:22:07.929431] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66206 ] 00:08:34.133 [2024-12-12 09:22:08.081952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.392 [2024-12-12 09:22:08.220169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.651 [2024-12-12 09:22:08.456040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.651 [2024-12-12 09:22:08.456090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:34.910 
09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.910 malloc1 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.910 [2024-12-12 09:22:08.806655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.910 [2024-12-12 09:22:08.806805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.910 [2024-12-12 09:22:08.806848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:34.910 [2024-12-12 09:22:08.806879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.910 [2024-12-12 09:22:08.809345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.910 [2024-12-12 09:22:08.809418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.910 pt1 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.910 malloc2 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.910 [2024-12-12 09:22:08.869302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.910 [2024-12-12 09:22:08.869423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.910 [2024-12-12 09:22:08.869464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:34.910 [2024-12-12 09:22:08.869493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.910 [2024-12-12 09:22:08.871913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.910 [2024-12-12 09:22:08.872000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.910 
pt2 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.910 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.169 malloc3 00:08:35.169 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.169 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:35.169 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.169 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.169 [2024-12-12 09:22:08.942453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:35.170 [2024-12-12 09:22:08.942594] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.170 [2024-12-12 09:22:08.942635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:35.170 [2024-12-12 09:22:08.942663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.170 [2024-12-12 09:22:08.945161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.170 [2024-12-12 09:22:08.945254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:35.170 pt3 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.170 [2024-12-12 09:22:08.954480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.170 [2024-12-12 09:22:08.956632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.170 [2024-12-12 09:22:08.956703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:35.170 [2024-12-12 09:22:08.956873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:35.170 [2024-12-12 09:22:08.956886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.170 [2024-12-12 09:22:08.957215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:35.170 [2024-12-12 09:22:08.957398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:35.170 [2024-12-12 09:22:08.957414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:35.170 [2024-12-12 09:22:08.957606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.170 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.170 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.170 "name": "raid_bdev1", 00:08:35.170 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:35.170 "strip_size_kb": 64, 00:08:35.170 "state": "online", 00:08:35.170 "raid_level": "raid0", 00:08:35.170 "superblock": true, 00:08:35.170 "num_base_bdevs": 3, 00:08:35.170 "num_base_bdevs_discovered": 3, 00:08:35.170 "num_base_bdevs_operational": 3, 00:08:35.170 "base_bdevs_list": [ 00:08:35.170 { 00:08:35.170 "name": "pt1", 00:08:35.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.170 "is_configured": true, 00:08:35.170 "data_offset": 2048, 00:08:35.170 "data_size": 63488 00:08:35.170 }, 00:08:35.170 { 00:08:35.170 "name": "pt2", 00:08:35.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.170 "is_configured": true, 00:08:35.170 "data_offset": 2048, 00:08:35.170 "data_size": 63488 00:08:35.170 }, 00:08:35.170 { 00:08:35.170 "name": "pt3", 00:08:35.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:35.170 "is_configured": true, 00:08:35.170 "data_offset": 2048, 00:08:35.170 "data_size": 63488 00:08:35.170 } 00:08:35.170 ] 00:08:35.170 }' 00:08:35.170 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.170 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.436 [2024-12-12 09:22:09.434053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.436 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.707 "name": "raid_bdev1", 00:08:35.707 "aliases": [ 00:08:35.707 "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2" 00:08:35.707 ], 00:08:35.707 "product_name": "Raid Volume", 00:08:35.707 "block_size": 512, 00:08:35.707 "num_blocks": 190464, 00:08:35.707 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:35.707 "assigned_rate_limits": { 00:08:35.707 "rw_ios_per_sec": 0, 00:08:35.707 "rw_mbytes_per_sec": 0, 00:08:35.707 "r_mbytes_per_sec": 0, 00:08:35.707 "w_mbytes_per_sec": 0 00:08:35.707 }, 00:08:35.707 "claimed": false, 00:08:35.707 "zoned": false, 00:08:35.707 "supported_io_types": { 00:08:35.707 "read": true, 00:08:35.707 "write": true, 00:08:35.707 "unmap": true, 00:08:35.707 "flush": true, 00:08:35.707 "reset": true, 00:08:35.707 "nvme_admin": false, 00:08:35.707 "nvme_io": false, 00:08:35.707 "nvme_io_md": false, 00:08:35.707 "write_zeroes": true, 00:08:35.707 "zcopy": false, 00:08:35.707 "get_zone_info": false, 00:08:35.707 "zone_management": false, 00:08:35.707 "zone_append": false, 00:08:35.707 "compare": 
false, 00:08:35.707 "compare_and_write": false, 00:08:35.707 "abort": false, 00:08:35.707 "seek_hole": false, 00:08:35.707 "seek_data": false, 00:08:35.707 "copy": false, 00:08:35.707 "nvme_iov_md": false 00:08:35.707 }, 00:08:35.707 "memory_domains": [ 00:08:35.707 { 00:08:35.707 "dma_device_id": "system", 00:08:35.707 "dma_device_type": 1 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.707 "dma_device_type": 2 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "dma_device_id": "system", 00:08:35.707 "dma_device_type": 1 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.707 "dma_device_type": 2 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "dma_device_id": "system", 00:08:35.707 "dma_device_type": 1 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.707 "dma_device_type": 2 00:08:35.707 } 00:08:35.707 ], 00:08:35.707 "driver_specific": { 00:08:35.707 "raid": { 00:08:35.707 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:35.707 "strip_size_kb": 64, 00:08:35.707 "state": "online", 00:08:35.707 "raid_level": "raid0", 00:08:35.707 "superblock": true, 00:08:35.707 "num_base_bdevs": 3, 00:08:35.707 "num_base_bdevs_discovered": 3, 00:08:35.707 "num_base_bdevs_operational": 3, 00:08:35.707 "base_bdevs_list": [ 00:08:35.707 { 00:08:35.707 "name": "pt1", 00:08:35.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.707 "is_configured": true, 00:08:35.707 "data_offset": 2048, 00:08:35.707 "data_size": 63488 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "name": "pt2", 00:08:35.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.707 "is_configured": true, 00:08:35.707 "data_offset": 2048, 00:08:35.707 "data_size": 63488 00:08:35.707 }, 00:08:35.707 { 00:08:35.707 "name": "pt3", 00:08:35.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:35.707 "is_configured": true, 00:08:35.707 "data_offset": 2048, 00:08:35.707 "data_size": 
63488 00:08:35.707 } 00:08:35.707 ] 00:08:35.707 } 00:08:35.707 } 00:08:35.707 }' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:35.707 pt2 00:08:35.707 pt3' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:35.707 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.707 [2024-12-12 09:22:09.709431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2 ']' 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 [2024-12-12 09:22:09.757089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.967 [2024-12-12 09:22:09.757162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.967 [2024-12-12 09:22:09.757269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.967 [2024-12-12 09:22:09.757356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.967 [2024-12-12 09:22:09.757396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:35.967 09:22:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.967 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 [2024-12-12 09:22:09.905254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:35.967 [2024-12-12 09:22:09.907538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:35.967 [2024-12-12 09:22:09.907643] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:35.967 [2024-12-12 09:22:09.907725] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:35.967 [2024-12-12 09:22:09.907839] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:35.967 [2024-12-12 09:22:09.907914] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:35.967 [2024-12-12 09:22:09.907976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.967 [2024-12-12 09:22:09.908009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:35.967 request: 00:08:35.967 { 00:08:35.967 "name": "raid_bdev1", 00:08:35.967 "raid_level": "raid0", 00:08:35.968 "base_bdevs": [ 00:08:35.968 "malloc1", 00:08:35.968 "malloc2", 00:08:35.968 "malloc3" 00:08:35.968 ], 00:08:35.968 "strip_size_kb": 64, 00:08:35.968 "superblock": false, 00:08:35.968 "method": "bdev_raid_create", 00:08:35.968 "req_id": 1 00:08:35.968 } 00:08:35.968 Got JSON-RPC error response 00:08:35.968 response: 00:08:35.968 { 00:08:35.968 "code": -17, 00:08:35.968 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:35.968 } 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.968 [2024-12-12 09:22:09.972855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.968 [2024-12-12 09:22:09.972993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.968 [2024-12-12 09:22:09.973035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:35.968 [2024-12-12 09:22:09.973075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.968 [2024-12-12 09:22:09.975684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.968 [2024-12-12 09:22:09.975788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.968 [2024-12-12 09:22:09.975921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:35.968 [2024-12-12 09:22:09.976015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:35.968 pt1 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.968 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.227 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.227 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.227 "name": "raid_bdev1", 00:08:36.227 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:36.227 
"strip_size_kb": 64, 00:08:36.227 "state": "configuring", 00:08:36.227 "raid_level": "raid0", 00:08:36.227 "superblock": true, 00:08:36.227 "num_base_bdevs": 3, 00:08:36.227 "num_base_bdevs_discovered": 1, 00:08:36.227 "num_base_bdevs_operational": 3, 00:08:36.227 "base_bdevs_list": [ 00:08:36.227 { 00:08:36.227 "name": "pt1", 00:08:36.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.227 "is_configured": true, 00:08:36.227 "data_offset": 2048, 00:08:36.227 "data_size": 63488 00:08:36.227 }, 00:08:36.227 { 00:08:36.227 "name": null, 00:08:36.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.227 "is_configured": false, 00:08:36.227 "data_offset": 2048, 00:08:36.227 "data_size": 63488 00:08:36.227 }, 00:08:36.227 { 00:08:36.227 "name": null, 00:08:36.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:36.227 "is_configured": false, 00:08:36.227 "data_offset": 2048, 00:08:36.227 "data_size": 63488 00:08:36.227 } 00:08:36.227 ] 00:08:36.227 }' 00:08:36.227 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.227 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.486 [2024-12-12 09:22:10.424115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:36.486 [2024-12-12 09:22:10.424283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.486 [2024-12-12 09:22:10.424332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:36.486 [2024-12-12 09:22:10.424365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.486 [2024-12-12 09:22:10.424919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.486 [2024-12-12 09:22:10.425005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:36.486 [2024-12-12 09:22:10.425140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:36.486 [2024-12-12 09:22:10.425204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:36.486 pt2 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.486 [2024-12-12 09:22:10.436060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.486 09:22:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.486 "name": "raid_bdev1", 00:08:36.486 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:36.486 "strip_size_kb": 64, 00:08:36.486 "state": "configuring", 00:08:36.486 "raid_level": "raid0", 00:08:36.486 "superblock": true, 00:08:36.486 "num_base_bdevs": 3, 00:08:36.486 "num_base_bdevs_discovered": 1, 00:08:36.486 "num_base_bdevs_operational": 3, 00:08:36.486 "base_bdevs_list": [ 00:08:36.486 { 00:08:36.486 "name": "pt1", 00:08:36.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.486 "is_configured": true, 00:08:36.486 "data_offset": 2048, 00:08:36.486 "data_size": 63488 00:08:36.486 }, 00:08:36.486 { 00:08:36.486 "name": null, 00:08:36.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.486 "is_configured": false, 00:08:36.486 "data_offset": 0, 00:08:36.486 "data_size": 63488 00:08:36.486 }, 00:08:36.486 { 00:08:36.486 "name": null, 00:08:36.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:36.486 
"is_configured": false, 00:08:36.486 "data_offset": 2048, 00:08:36.486 "data_size": 63488 00:08:36.486 } 00:08:36.486 ] 00:08:36.486 }' 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.486 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.054 [2024-12-12 09:22:10.839353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.054 [2024-12-12 09:22:10.839507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.054 [2024-12-12 09:22:10.839544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:37.054 [2024-12-12 09:22:10.839577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.054 [2024-12-12 09:22:10.840172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.054 [2024-12-12 09:22:10.840238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:37.054 [2024-12-12 09:22:10.840361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:37.054 [2024-12-12 09:22:10.840415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.054 pt2 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.054 [2024-12-12 09:22:10.855276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:37.054 [2024-12-12 09:22:10.855362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.054 [2024-12-12 09:22:10.855391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:37.054 [2024-12-12 09:22:10.855421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.054 [2024-12-12 09:22:10.855869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.054 [2024-12-12 09:22:10.855931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:37.054 [2024-12-12 09:22:10.856027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:37.054 [2024-12-12 09:22:10.856076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:37.054 [2024-12-12 09:22:10.856220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.054 [2024-12-12 09:22:10.856259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.054 [2024-12-12 09:22:10.856531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:37.054 [2024-12-12 09:22:10.856715] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.054 [2024-12-12 09:22:10.856750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:37.054 [2024-12-12 09:22:10.856916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.054 pt3 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.054 "name": "raid_bdev1", 00:08:37.054 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:37.054 "strip_size_kb": 64, 00:08:37.054 "state": "online", 00:08:37.054 "raid_level": "raid0", 00:08:37.054 "superblock": true, 00:08:37.054 "num_base_bdevs": 3, 00:08:37.054 "num_base_bdevs_discovered": 3, 00:08:37.054 "num_base_bdevs_operational": 3, 00:08:37.054 "base_bdevs_list": [ 00:08:37.054 { 00:08:37.054 "name": "pt1", 00:08:37.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.054 "is_configured": true, 00:08:37.054 "data_offset": 2048, 00:08:37.054 "data_size": 63488 00:08:37.054 }, 00:08:37.054 { 00:08:37.054 "name": "pt2", 00:08:37.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.054 "is_configured": true, 00:08:37.054 "data_offset": 2048, 00:08:37.054 "data_size": 63488 00:08:37.054 }, 00:08:37.054 { 00:08:37.054 "name": "pt3", 00:08:37.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.054 "is_configured": true, 00:08:37.054 "data_offset": 2048, 00:08:37.054 "data_size": 63488 00:08:37.054 } 00:08:37.054 ] 00:08:37.054 }' 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.054 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:37.313 09:22:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.313 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.314 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.314 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.314 [2024-12-12 09:22:11.298909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.314 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.314 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.314 "name": "raid_bdev1", 00:08:37.314 "aliases": [ 00:08:37.314 "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2" 00:08:37.314 ], 00:08:37.314 "product_name": "Raid Volume", 00:08:37.314 "block_size": 512, 00:08:37.314 "num_blocks": 190464, 00:08:37.314 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:37.314 "assigned_rate_limits": { 00:08:37.314 "rw_ios_per_sec": 0, 00:08:37.314 "rw_mbytes_per_sec": 0, 00:08:37.314 "r_mbytes_per_sec": 0, 00:08:37.314 "w_mbytes_per_sec": 0 00:08:37.314 }, 00:08:37.314 "claimed": false, 00:08:37.314 "zoned": false, 00:08:37.314 "supported_io_types": { 00:08:37.314 "read": true, 00:08:37.314 "write": true, 00:08:37.314 "unmap": true, 00:08:37.314 "flush": true, 00:08:37.314 "reset": true, 00:08:37.314 "nvme_admin": false, 00:08:37.314 "nvme_io": false, 00:08:37.314 "nvme_io_md": false, 00:08:37.314 
"write_zeroes": true, 00:08:37.314 "zcopy": false, 00:08:37.314 "get_zone_info": false, 00:08:37.314 "zone_management": false, 00:08:37.314 "zone_append": false, 00:08:37.314 "compare": false, 00:08:37.314 "compare_and_write": false, 00:08:37.314 "abort": false, 00:08:37.314 "seek_hole": false, 00:08:37.314 "seek_data": false, 00:08:37.314 "copy": false, 00:08:37.314 "nvme_iov_md": false 00:08:37.314 }, 00:08:37.314 "memory_domains": [ 00:08:37.314 { 00:08:37.314 "dma_device_id": "system", 00:08:37.314 "dma_device_type": 1 00:08:37.314 }, 00:08:37.314 { 00:08:37.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.314 "dma_device_type": 2 00:08:37.314 }, 00:08:37.314 { 00:08:37.314 "dma_device_id": "system", 00:08:37.314 "dma_device_type": 1 00:08:37.314 }, 00:08:37.314 { 00:08:37.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.314 "dma_device_type": 2 00:08:37.314 }, 00:08:37.314 { 00:08:37.314 "dma_device_id": "system", 00:08:37.314 "dma_device_type": 1 00:08:37.314 }, 00:08:37.314 { 00:08:37.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.314 "dma_device_type": 2 00:08:37.314 } 00:08:37.314 ], 00:08:37.314 "driver_specific": { 00:08:37.314 "raid": { 00:08:37.314 "uuid": "c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2", 00:08:37.314 "strip_size_kb": 64, 00:08:37.314 "state": "online", 00:08:37.314 "raid_level": "raid0", 00:08:37.314 "superblock": true, 00:08:37.314 "num_base_bdevs": 3, 00:08:37.314 "num_base_bdevs_discovered": 3, 00:08:37.314 "num_base_bdevs_operational": 3, 00:08:37.314 "base_bdevs_list": [ 00:08:37.314 { 00:08:37.314 "name": "pt1", 00:08:37.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.314 "is_configured": true, 00:08:37.314 "data_offset": 2048, 00:08:37.314 "data_size": 63488 00:08:37.314 }, 00:08:37.314 { 00:08:37.314 "name": "pt2", 00:08:37.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.314 "is_configured": true, 00:08:37.314 "data_offset": 2048, 00:08:37.314 "data_size": 63488 00:08:37.314 }, 00:08:37.314 
{ 00:08:37.314 "name": "pt3", 00:08:37.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.314 "is_configured": true, 00:08:37.314 "data_offset": 2048, 00:08:37.314 "data_size": 63488 00:08:37.314 } 00:08:37.314 ] 00:08:37.314 } 00:08:37.314 } 00:08:37.314 }' 00:08:37.572 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.572 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:37.573 pt2 00:08:37.573 pt3' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:37.573 09:22:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:37.573 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.573 
[2024-12-12 09:22:11.590362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2 '!=' c0f274fb-3e5b-4bbf-a492-7db18e0b6ee2 ']' 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66206 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66206 ']' 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66206 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66206 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.832 killing process with pid 66206 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66206' 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66206 00:08:37.832 [2024-12-12 09:22:11.668352] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.832 [2024-12-12 09:22:11.668484] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.832 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66206 00:08:37.832 [2024-12-12 09:22:11.668556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.832 [2024-12-12 09:22:11.668570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:38.090 [2024-12-12 09:22:11.987075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.467 09:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:39.467 ************************************ 00:08:39.467 END TEST raid_superblock_test 00:08:39.467 ************************************ 00:08:39.467 00:08:39.467 real 0m5.359s 00:08:39.467 user 0m7.534s 00:08:39.467 sys 0m1.001s 00:08:39.467 09:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.467 09:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.467 09:22:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:39.467 09:22:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.467 09:22:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.467 09:22:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.467 ************************************ 00:08:39.467 START TEST raid_read_error_test 00:08:39.467 ************************************ 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:39.467 09:22:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QlgwJPBhln 00:08:39.467 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66459 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66459 00:08:39.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66459 ']' 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.468 09:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.468 [2024-12-12 09:22:13.379314] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:08:39.468 [2024-12-12 09:22:13.379421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66459 ] 00:08:39.727 [2024-12-12 09:22:13.551824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.727 [2024-12-12 09:22:13.682836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.986 [2024-12-12 09:22:13.913851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.986 [2024-12-12 09:22:13.913901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.245 BaseBdev1_malloc 00:08:40.245 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.246 true 00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.246 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.505 [2024-12-12 09:22:14.269865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:40.505 [2024-12-12 09:22:14.270028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.505 [2024-12-12 09:22:14.270067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:40.505 [2024-12-12 09:22:14.270096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.505 [2024-12-12 09:22:14.272455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.505 [2024-12-12 09:22:14.272540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:40.505 BaseBdev1 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.505 BaseBdev2_malloc 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.505 true 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.505 [2024-12-12 09:22:14.344055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.505 [2024-12-12 09:22:14.344119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.505 [2024-12-12 09:22:14.344136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:40.505 [2024-12-12 09:22:14.344148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.505 [2024-12-12 09:22:14.346520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.505 [2024-12-12 09:22:14.346560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.505 BaseBdev2 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.505 BaseBdev3_malloc 00:08:40.505 09:22:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.505 true 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.505 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.506 [2024-12-12 09:22:14.432104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:40.506 [2024-12-12 09:22:14.432159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.506 [2024-12-12 09:22:14.432176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:40.506 [2024-12-12 09:22:14.432186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.506 [2024-12-12 09:22:14.434451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.506 [2024-12-12 09:22:14.434489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:40.506 BaseBdev3 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.506 [2024-12-12 09:22:14.444168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.506 [2024-12-12 09:22:14.446174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.506 [2024-12-12 09:22:14.446284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.506 [2024-12-12 09:22:14.446517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.506 [2024-12-12 09:22:14.446567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.506 [2024-12-12 09:22:14.446828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:40.506 [2024-12-12 09:22:14.447051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.506 [2024-12-12 09:22:14.447099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:40.506 [2024-12-12 09:22:14.447286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.506 09:22:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.506 "name": "raid_bdev1", 00:08:40.506 "uuid": "9fd19a10-febb-41eb-85a2-215003c0a1e7", 00:08:40.506 "strip_size_kb": 64, 00:08:40.506 "state": "online", 00:08:40.506 "raid_level": "raid0", 00:08:40.506 "superblock": true, 00:08:40.506 "num_base_bdevs": 3, 00:08:40.506 "num_base_bdevs_discovered": 3, 00:08:40.506 "num_base_bdevs_operational": 3, 00:08:40.506 "base_bdevs_list": [ 00:08:40.506 { 00:08:40.506 "name": "BaseBdev1", 00:08:40.506 "uuid": "a9329f33-e0a4-59ed-a1b8-925703311f5e", 00:08:40.506 "is_configured": true, 00:08:40.506 "data_offset": 2048, 00:08:40.506 "data_size": 63488 00:08:40.506 }, 00:08:40.506 { 00:08:40.506 "name": "BaseBdev2", 00:08:40.506 "uuid": "db7bdce0-a949-5067-8ce2-609493af180d", 00:08:40.506 "is_configured": true, 00:08:40.506 "data_offset": 2048, 00:08:40.506 "data_size": 63488 
00:08:40.506 }, 00:08:40.506 { 00:08:40.506 "name": "BaseBdev3", 00:08:40.506 "uuid": "8bbe8ba3-c514-5d3a-881d-b392bfed7927", 00:08:40.506 "is_configured": true, 00:08:40.506 "data_offset": 2048, 00:08:40.506 "data_size": 63488 00:08:40.506 } 00:08:40.506 ] 00:08:40.506 }' 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.506 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:41.074 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:41.074 [2024-12-12 09:22:14.952895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.009 "name": "raid_bdev1", 00:08:42.009 "uuid": "9fd19a10-febb-41eb-85a2-215003c0a1e7", 00:08:42.009 "strip_size_kb": 64, 00:08:42.009 "state": "online", 00:08:42.009 "raid_level": "raid0", 00:08:42.009 "superblock": true, 00:08:42.009 "num_base_bdevs": 3, 00:08:42.009 "num_base_bdevs_discovered": 3, 00:08:42.009 "num_base_bdevs_operational": 3, 00:08:42.009 "base_bdevs_list": [ 00:08:42.009 { 00:08:42.009 "name": "BaseBdev1", 00:08:42.009 "uuid": "a9329f33-e0a4-59ed-a1b8-925703311f5e", 00:08:42.009 "is_configured": true, 00:08:42.009 "data_offset": 2048, 00:08:42.009 "data_size": 63488 
00:08:42.009 }, 00:08:42.009 { 00:08:42.009 "name": "BaseBdev2", 00:08:42.009 "uuid": "db7bdce0-a949-5067-8ce2-609493af180d", 00:08:42.009 "is_configured": true, 00:08:42.009 "data_offset": 2048, 00:08:42.009 "data_size": 63488 00:08:42.009 }, 00:08:42.009 { 00:08:42.009 "name": "BaseBdev3", 00:08:42.009 "uuid": "8bbe8ba3-c514-5d3a-881d-b392bfed7927", 00:08:42.009 "is_configured": true, 00:08:42.009 "data_offset": 2048, 00:08:42.009 "data_size": 63488 00:08:42.009 } 00:08:42.009 ] 00:08:42.009 }' 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.009 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.577 09:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.577 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.577 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.577 [2024-12-12 09:22:16.306148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.577 [2024-12-12 09:22:16.306266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.577 [2024-12-12 09:22:16.309017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.577 [2024-12-12 09:22:16.309109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.577 [2024-12-12 09:22:16.309171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.577 [2024-12-12 09:22:16.309211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:42.577 { 00:08:42.577 "results": [ 00:08:42.577 { 00:08:42.577 "job": "raid_bdev1", 00:08:42.577 "core_mask": "0x1", 00:08:42.577 "workload": "randrw", 00:08:42.577 "percentage": 50, 
00:08:42.577 "status": "finished", 00:08:42.577 "queue_depth": 1, 00:08:42.577 "io_size": 131072, 00:08:42.577 "runtime": 1.353858, 00:08:42.577 "iops": 13615.90358811633, 00:08:42.577 "mibps": 1701.9879485145414, 00:08:42.577 "io_failed": 1, 00:08:42.577 "io_timeout": 0, 00:08:42.577 "avg_latency_us": 103.18184145167193, 00:08:42.577 "min_latency_us": 25.823580786026202, 00:08:42.577 "max_latency_us": 1409.4532751091704 00:08:42.577 } 00:08:42.577 ], 00:08:42.577 "core_count": 1 00:08:42.577 } 00:08:42.577 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66459 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66459 ']' 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66459 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66459 00:08:42.578 killing process with pid 66459 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66459' 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66459 00:08:42.578 [2024-12-12 09:22:16.356383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.578 09:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66459 00:08:42.837 [2024-12-12 
09:22:16.612066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QlgwJPBhln 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.216 ************************************ 00:08:44.216 END TEST raid_read_error_test 00:08:44.216 ************************************ 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:44.216 00:08:44.216 real 0m4.655s 00:08:44.216 user 0m5.320s 00:08:44.216 sys 0m0.675s 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.216 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.216 09:22:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:44.216 09:22:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:44.216 09:22:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.216 09:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.216 ************************************ 00:08:44.216 START TEST raid_write_error_test 00:08:44.216 ************************************ 00:08:44.216 09:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:44.216 09:22:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:44.216 09:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:44.216 09:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:44.216 09:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:44.216 09:22:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cIVQuL3ZZv 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66606 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66606 00:08:44.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 66606 ']' 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.216 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.216 [2024-12-12 09:22:18.102290] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:08:44.216 [2024-12-12 09:22:18.102402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66606 ] 00:08:44.475 [2024-12-12 09:22:18.277938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.475 [2024-12-12 09:22:18.411922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.733 [2024-12-12 09:22:18.645347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.733 [2024-12-12 09:22:18.645394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.993 BaseBdev1_malloc 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.993 true 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.993 09:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.993 [2024-12-12 09:22:18.996313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.993 [2024-12-12 09:22:18.996462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.993 [2024-12-12 09:22:18.996503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.993 [2024-12-12 09:22:18.996592] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.993 [2024-12-12 09:22:18.999116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.993 [2024-12-12 09:22:18.999205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.993 BaseBdev1 00:08:44.993 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.993 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.993 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:44.993 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.993 09:22:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.252 BaseBdev2_malloc 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 true 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 [2024-12-12 09:22:19.068463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:45.252 [2024-12-12 09:22:19.068598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.252 [2024-12-12 09:22:19.068640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:45.252 [2024-12-12 09:22:19.068720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.252 [2024-12-12 09:22:19.071224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.252 [2024-12-12 09:22:19.071300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:45.252 BaseBdev2 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.252 09:22:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 BaseBdev3_malloc 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 true 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 [2024-12-12 09:22:19.156438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:45.252 [2024-12-12 09:22:19.156497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.252 [2024-12-12 09:22:19.156515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:45.252 [2024-12-12 09:22:19.156527] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.252 [2024-12-12 09:22:19.158932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.252 [2024-12-12 09:22:19.158986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:45.252 BaseBdev3 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 [2024-12-12 09:22:19.168504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.252 [2024-12-12 09:22:19.170746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.252 [2024-12-12 09:22:19.170882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.252 [2024-12-12 09:22:19.171171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:45.252 [2024-12-12 09:22:19.171238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:45.252 [2024-12-12 09:22:19.171522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:45.252 [2024-12-12 09:22:19.171720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:45.252 [2024-12-12 09:22:19.171791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:45.252 [2024-12-12 09:22:19.172002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.252 "name": "raid_bdev1", 00:08:45.252 "uuid": "9d55a167-683c-45eb-bc16-a18fdc78ffa0", 00:08:45.252 "strip_size_kb": 64, 00:08:45.252 "state": "online", 00:08:45.252 "raid_level": "raid0", 00:08:45.252 "superblock": true, 00:08:45.252 "num_base_bdevs": 3, 00:08:45.252 "num_base_bdevs_discovered": 3, 00:08:45.252 "num_base_bdevs_operational": 3, 00:08:45.252 "base_bdevs_list": [ 00:08:45.252 { 00:08:45.252 "name": "BaseBdev1", 
00:08:45.252 "uuid": "e0bdfc41-03a6-58c5-833e-78b0b578aa17", 00:08:45.252 "is_configured": true, 00:08:45.252 "data_offset": 2048, 00:08:45.252 "data_size": 63488 00:08:45.252 }, 00:08:45.252 { 00:08:45.252 "name": "BaseBdev2", 00:08:45.252 "uuid": "c6c4afd2-20ca-588f-ace5-50de6468a808", 00:08:45.252 "is_configured": true, 00:08:45.252 "data_offset": 2048, 00:08:45.252 "data_size": 63488 00:08:45.252 }, 00:08:45.252 { 00:08:45.252 "name": "BaseBdev3", 00:08:45.252 "uuid": "c43cd916-40ab-57a9-8030-900d30211869", 00:08:45.252 "is_configured": true, 00:08:45.252 "data_offset": 2048, 00:08:45.252 "data_size": 63488 00:08:45.252 } 00:08:45.252 ] 00:08:45.252 }' 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.252 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.820 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.820 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.820 [2024-12-12 09:22:19.689007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.772 "name": "raid_bdev1", 00:08:46.772 "uuid": "9d55a167-683c-45eb-bc16-a18fdc78ffa0", 00:08:46.772 "strip_size_kb": 64, 00:08:46.772 "state": "online", 00:08:46.772 
"raid_level": "raid0", 00:08:46.772 "superblock": true, 00:08:46.772 "num_base_bdevs": 3, 00:08:46.772 "num_base_bdevs_discovered": 3, 00:08:46.772 "num_base_bdevs_operational": 3, 00:08:46.772 "base_bdevs_list": [ 00:08:46.772 { 00:08:46.772 "name": "BaseBdev1", 00:08:46.772 "uuid": "e0bdfc41-03a6-58c5-833e-78b0b578aa17", 00:08:46.772 "is_configured": true, 00:08:46.772 "data_offset": 2048, 00:08:46.772 "data_size": 63488 00:08:46.772 }, 00:08:46.772 { 00:08:46.772 "name": "BaseBdev2", 00:08:46.772 "uuid": "c6c4afd2-20ca-588f-ace5-50de6468a808", 00:08:46.772 "is_configured": true, 00:08:46.772 "data_offset": 2048, 00:08:46.772 "data_size": 63488 00:08:46.772 }, 00:08:46.772 { 00:08:46.772 "name": "BaseBdev3", 00:08:46.772 "uuid": "c43cd916-40ab-57a9-8030-900d30211869", 00:08:46.772 "is_configured": true, 00:08:46.772 "data_offset": 2048, 00:08:46.772 "data_size": 63488 00:08:46.772 } 00:08:46.772 ] 00:08:46.772 }' 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.772 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.031 09:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.031 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.031 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.031 [2024-12-12 09:22:21.049716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.031 [2024-12-12 09:22:21.049849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.031 [2024-12-12 09:22:21.052729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.031 [2024-12-12 09:22:21.052824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.031 [2024-12-12 09:22:21.052888] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.031 [2024-12-12 09:22:21.052936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:47.290 { 00:08:47.290 "results": [ 00:08:47.290 { 00:08:47.290 "job": "raid_bdev1", 00:08:47.290 "core_mask": "0x1", 00:08:47.290 "workload": "randrw", 00:08:47.290 "percentage": 50, 00:08:47.290 "status": "finished", 00:08:47.290 "queue_depth": 1, 00:08:47.290 "io_size": 131072, 00:08:47.290 "runtime": 1.361565, 00:08:47.290 "iops": 13718.77214822649, 00:08:47.290 "mibps": 1714.8465185283112, 00:08:47.290 "io_failed": 1, 00:08:47.290 "io_timeout": 0, 00:08:47.290 "avg_latency_us": 102.27342490859616, 00:08:47.290 "min_latency_us": 25.7117903930131, 00:08:47.290 "max_latency_us": 1502.46288209607 00:08:47.290 } 00:08:47.290 ], 00:08:47.290 "core_count": 1 00:08:47.290 } 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66606 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 66606 ']' 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 66606 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66606 00:08:47.290 killing process with pid 66606 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.290 09:22:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66606' 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 66606 00:08:47.290 [2024-12-12 09:22:21.080612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.290 09:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 66606 00:08:47.550 [2024-12-12 09:22:21.338946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cIVQuL3ZZv 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:48.927 00:08:48.927 real 0m4.645s 00:08:48.927 user 0m5.367s 00:08:48.927 sys 0m0.634s 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.927 ************************************ 00:08:48.927 END TEST raid_write_error_test 00:08:48.927 ************************************ 00:08:48.927 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.927 09:22:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:48.927 09:22:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:48.927 09:22:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.927 09:22:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.927 09:22:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.927 ************************************ 00:08:48.927 START TEST raid_state_function_test 00:08:48.927 ************************************ 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:48.927 09:22:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66755 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66755' 00:08:48.927 Process raid pid: 66755 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66755 00:08:48.927 09:22:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66755 ']' 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.927 09:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.927 [2024-12-12 09:22:22.811984] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:08:48.927 [2024-12-12 09:22:22.812186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.186 [2024-12-12 09:22:22.986168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.186 [2024-12-12 09:22:23.122079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.446 [2024-12-12 09:22:23.368684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.446 [2024-12-12 09:22:23.368841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.705 [2024-12-12 09:22:23.650607] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.705 [2024-12-12 09:22:23.650678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.705 [2024-12-12 09:22:23.650689] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.705 [2024-12-12 09:22:23.650699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.705 [2024-12-12 09:22:23.650705] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.705 [2024-12-12 09:22:23.650715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.705 "name": "Existed_Raid", 00:08:49.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.705 "strip_size_kb": 64, 00:08:49.705 "state": "configuring", 00:08:49.705 "raid_level": "concat", 00:08:49.705 "superblock": false, 00:08:49.705 "num_base_bdevs": 3, 00:08:49.705 "num_base_bdevs_discovered": 0, 00:08:49.705 "num_base_bdevs_operational": 3, 00:08:49.705 "base_bdevs_list": [ 00:08:49.705 { 00:08:49.705 "name": "BaseBdev1", 00:08:49.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.705 "is_configured": false, 00:08:49.705 "data_offset": 0, 00:08:49.705 "data_size": 0 00:08:49.705 }, 00:08:49.705 { 00:08:49.705 "name": "BaseBdev2", 00:08:49.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.705 "is_configured": false, 00:08:49.705 "data_offset": 0, 00:08:49.705 "data_size": 0 00:08:49.705 }, 00:08:49.705 { 00:08:49.705 "name": "BaseBdev3", 00:08:49.705 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:49.705 "is_configured": false, 00:08:49.705 "data_offset": 0, 00:08:49.705 "data_size": 0 00:08:49.705 } 00:08:49.705 ] 00:08:49.705 }' 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.705 09:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 [2024-12-12 09:22:24.050031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.273 [2024-12-12 09:22:24.050080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 [2024-12-12 09:22:24.062001] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.273 [2024-12-12 09:22:24.062051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.273 [2024-12-12 09:22:24.062061] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.273 [2024-12-12 09:22:24.062071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:50.273 [2024-12-12 09:22:24.062077] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.273 [2024-12-12 09:22:24.062087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 [2024-12-12 09:22:24.117681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.273 BaseBdev1 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 [ 00:08:50.273 { 00:08:50.273 "name": "BaseBdev1", 00:08:50.273 "aliases": [ 00:08:50.273 "a04648e9-df00-444f-b092-ca44ef1aacce" 00:08:50.273 ], 00:08:50.273 "product_name": "Malloc disk", 00:08:50.273 "block_size": 512, 00:08:50.273 "num_blocks": 65536, 00:08:50.273 "uuid": "a04648e9-df00-444f-b092-ca44ef1aacce", 00:08:50.273 "assigned_rate_limits": { 00:08:50.273 "rw_ios_per_sec": 0, 00:08:50.273 "rw_mbytes_per_sec": 0, 00:08:50.273 "r_mbytes_per_sec": 0, 00:08:50.273 "w_mbytes_per_sec": 0 00:08:50.273 }, 00:08:50.273 "claimed": true, 00:08:50.273 "claim_type": "exclusive_write", 00:08:50.273 "zoned": false, 00:08:50.273 "supported_io_types": { 00:08:50.273 "read": true, 00:08:50.273 "write": true, 00:08:50.273 "unmap": true, 00:08:50.273 "flush": true, 00:08:50.273 "reset": true, 00:08:50.273 "nvme_admin": false, 00:08:50.273 "nvme_io": false, 00:08:50.273 "nvme_io_md": false, 00:08:50.273 "write_zeroes": true, 00:08:50.273 "zcopy": true, 00:08:50.273 "get_zone_info": false, 00:08:50.273 "zone_management": false, 00:08:50.273 "zone_append": false, 00:08:50.273 "compare": false, 00:08:50.273 "compare_and_write": false, 00:08:50.273 "abort": true, 00:08:50.273 "seek_hole": false, 00:08:50.273 "seek_data": false, 00:08:50.273 "copy": true, 00:08:50.273 "nvme_iov_md": false 00:08:50.273 }, 00:08:50.273 "memory_domains": [ 00:08:50.273 { 00:08:50.273 "dma_device_id": "system", 00:08:50.273 "dma_device_type": 1 00:08:50.273 }, 00:08:50.273 { 00:08:50.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:50.273 "dma_device_type": 2 00:08:50.273 } 00:08:50.273 ], 00:08:50.273 "driver_specific": {} 00:08:50.273 } 00:08:50.273 ] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.273 09:22:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.273 "name": "Existed_Raid", 00:08:50.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.273 "strip_size_kb": 64, 00:08:50.273 "state": "configuring", 00:08:50.273 "raid_level": "concat", 00:08:50.273 "superblock": false, 00:08:50.273 "num_base_bdevs": 3, 00:08:50.273 "num_base_bdevs_discovered": 1, 00:08:50.273 "num_base_bdevs_operational": 3, 00:08:50.273 "base_bdevs_list": [ 00:08:50.273 { 00:08:50.273 "name": "BaseBdev1", 00:08:50.273 "uuid": "a04648e9-df00-444f-b092-ca44ef1aacce", 00:08:50.273 "is_configured": true, 00:08:50.273 "data_offset": 0, 00:08:50.273 "data_size": 65536 00:08:50.273 }, 00:08:50.273 { 00:08:50.273 "name": "BaseBdev2", 00:08:50.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.273 "is_configured": false, 00:08:50.273 "data_offset": 0, 00:08:50.273 "data_size": 0 00:08:50.273 }, 00:08:50.273 { 00:08:50.273 "name": "BaseBdev3", 00:08:50.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.273 "is_configured": false, 00:08:50.273 "data_offset": 0, 00:08:50.273 "data_size": 0 00:08:50.273 } 00:08:50.273 ] 00:08:50.273 }' 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.273 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.532 [2024-12-12 09:22:24.545070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.532 [2024-12-12 09:22:24.545136] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.532 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.791 [2024-12-12 09:22:24.557083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.791 [2024-12-12 09:22:24.559165] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.791 [2024-12-12 09:22:24.559209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.791 [2024-12-12 09:22:24.559219] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.791 [2024-12-12 09:22:24.559229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.791 09:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.791 "name": "Existed_Raid", 00:08:50.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.791 "strip_size_kb": 64, 00:08:50.791 "state": "configuring", 00:08:50.791 "raid_level": "concat", 00:08:50.791 "superblock": false, 00:08:50.791 "num_base_bdevs": 3, 00:08:50.791 "num_base_bdevs_discovered": 1, 00:08:50.791 "num_base_bdevs_operational": 3, 00:08:50.791 "base_bdevs_list": [ 00:08:50.791 { 00:08:50.791 "name": "BaseBdev1", 00:08:50.791 "uuid": "a04648e9-df00-444f-b092-ca44ef1aacce", 00:08:50.791 "is_configured": true, 00:08:50.791 "data_offset": 
0, 00:08:50.791 "data_size": 65536 00:08:50.791 }, 00:08:50.791 { 00:08:50.791 "name": "BaseBdev2", 00:08:50.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.791 "is_configured": false, 00:08:50.791 "data_offset": 0, 00:08:50.791 "data_size": 0 00:08:50.791 }, 00:08:50.791 { 00:08:50.791 "name": "BaseBdev3", 00:08:50.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.791 "is_configured": false, 00:08:50.791 "data_offset": 0, 00:08:50.791 "data_size": 0 00:08:50.791 } 00:08:50.791 ] 00:08:50.791 }' 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.791 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.050 [2024-12-12 09:22:25.062034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.050 BaseBdev2 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.050 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.309 [ 00:08:51.309 { 00:08:51.309 "name": "BaseBdev2", 00:08:51.309 "aliases": [ 00:08:51.309 "a66b6fb7-d31b-4438-aed7-f9f6be794a43" 00:08:51.309 ], 00:08:51.309 "product_name": "Malloc disk", 00:08:51.309 "block_size": 512, 00:08:51.309 "num_blocks": 65536, 00:08:51.309 "uuid": "a66b6fb7-d31b-4438-aed7-f9f6be794a43", 00:08:51.309 "assigned_rate_limits": { 00:08:51.309 "rw_ios_per_sec": 0, 00:08:51.309 "rw_mbytes_per_sec": 0, 00:08:51.309 "r_mbytes_per_sec": 0, 00:08:51.309 "w_mbytes_per_sec": 0 00:08:51.309 }, 00:08:51.309 "claimed": true, 00:08:51.309 "claim_type": "exclusive_write", 00:08:51.309 "zoned": false, 00:08:51.309 "supported_io_types": { 00:08:51.309 "read": true, 00:08:51.309 "write": true, 00:08:51.309 "unmap": true, 00:08:51.309 "flush": true, 00:08:51.309 "reset": true, 00:08:51.309 "nvme_admin": false, 00:08:51.309 "nvme_io": false, 00:08:51.309 "nvme_io_md": false, 00:08:51.309 "write_zeroes": true, 00:08:51.309 "zcopy": true, 00:08:51.309 "get_zone_info": false, 00:08:51.309 "zone_management": false, 00:08:51.309 "zone_append": false, 00:08:51.309 "compare": false, 00:08:51.309 "compare_and_write": false, 00:08:51.309 "abort": true, 00:08:51.309 "seek_hole": 
false, 00:08:51.309 "seek_data": false, 00:08:51.309 "copy": true, 00:08:51.309 "nvme_iov_md": false 00:08:51.309 }, 00:08:51.309 "memory_domains": [ 00:08:51.309 { 00:08:51.309 "dma_device_id": "system", 00:08:51.309 "dma_device_type": 1 00:08:51.309 }, 00:08:51.309 { 00:08:51.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.309 "dma_device_type": 2 00:08:51.309 } 00:08:51.309 ], 00:08:51.309 "driver_specific": {} 00:08:51.309 } 00:08:51.309 ] 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.309 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.309 "name": "Existed_Raid", 00:08:51.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.309 "strip_size_kb": 64, 00:08:51.309 "state": "configuring", 00:08:51.309 "raid_level": "concat", 00:08:51.309 "superblock": false, 00:08:51.309 "num_base_bdevs": 3, 00:08:51.309 "num_base_bdevs_discovered": 2, 00:08:51.309 "num_base_bdevs_operational": 3, 00:08:51.309 "base_bdevs_list": [ 00:08:51.309 { 00:08:51.309 "name": "BaseBdev1", 00:08:51.309 "uuid": "a04648e9-df00-444f-b092-ca44ef1aacce", 00:08:51.309 "is_configured": true, 00:08:51.309 "data_offset": 0, 00:08:51.309 "data_size": 65536 00:08:51.309 }, 00:08:51.309 { 00:08:51.309 "name": "BaseBdev2", 00:08:51.309 "uuid": "a66b6fb7-d31b-4438-aed7-f9f6be794a43", 00:08:51.310 "is_configured": true, 00:08:51.310 "data_offset": 0, 00:08:51.310 "data_size": 65536 00:08:51.310 }, 00:08:51.310 { 00:08:51.310 "name": "BaseBdev3", 00:08:51.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.310 "is_configured": false, 00:08:51.310 "data_offset": 0, 00:08:51.310 "data_size": 0 00:08:51.310 } 00:08:51.310 ] 00:08:51.310 }' 00:08:51.310 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.310 09:22:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.568 [2024-12-12 09:22:25.588313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.568 [2024-12-12 09:22:25.588371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.568 [2024-12-12 09:22:25.588386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:51.568 [2024-12-12 09:22:25.588677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:51.568 [2024-12-12 09:22:25.588876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.568 [2024-12-12 09:22:25.588890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:51.568 [2024-12-12 09:22:25.589192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.568 BaseBdev3 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:51.568 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.828 09:22:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.828 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.828 [ 00:08:51.828 { 00:08:51.828 "name": "BaseBdev3", 00:08:51.828 "aliases": [ 00:08:51.828 "bfcb4b81-36d5-4c4a-b144-3bbdf215cfe4" 00:08:51.828 ], 00:08:51.828 "product_name": "Malloc disk", 00:08:51.828 "block_size": 512, 00:08:51.828 "num_blocks": 65536, 00:08:51.828 "uuid": "bfcb4b81-36d5-4c4a-b144-3bbdf215cfe4", 00:08:51.828 "assigned_rate_limits": { 00:08:51.828 "rw_ios_per_sec": 0, 00:08:51.828 "rw_mbytes_per_sec": 0, 00:08:51.828 "r_mbytes_per_sec": 0, 00:08:51.828 "w_mbytes_per_sec": 0 00:08:51.828 }, 00:08:51.828 "claimed": true, 00:08:51.828 "claim_type": "exclusive_write", 00:08:51.828 "zoned": false, 00:08:51.828 "supported_io_types": { 00:08:51.828 "read": true, 00:08:51.828 "write": true, 00:08:51.828 "unmap": true, 00:08:51.828 "flush": true, 00:08:51.828 "reset": true, 00:08:51.828 "nvme_admin": false, 00:08:51.828 "nvme_io": false, 00:08:51.828 "nvme_io_md": false, 00:08:51.828 "write_zeroes": true, 00:08:51.828 "zcopy": true, 00:08:51.829 "get_zone_info": false, 00:08:51.829 "zone_management": false, 00:08:51.829 "zone_append": false, 00:08:51.829 "compare": false, 
00:08:51.829 "compare_and_write": false, 00:08:51.829 "abort": true, 00:08:51.829 "seek_hole": false, 00:08:51.829 "seek_data": false, 00:08:51.829 "copy": true, 00:08:51.829 "nvme_iov_md": false 00:08:51.829 }, 00:08:51.829 "memory_domains": [ 00:08:51.829 { 00:08:51.829 "dma_device_id": "system", 00:08:51.829 "dma_device_type": 1 00:08:51.829 }, 00:08:51.829 { 00:08:51.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.829 "dma_device_type": 2 00:08:51.829 } 00:08:51.829 ], 00:08:51.829 "driver_specific": {} 00:08:51.829 } 00:08:51.829 ] 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.829 "name": "Existed_Raid", 00:08:51.829 "uuid": "a27e6898-d41e-4284-be26-39ef9a705bd7", 00:08:51.829 "strip_size_kb": 64, 00:08:51.829 "state": "online", 00:08:51.829 "raid_level": "concat", 00:08:51.829 "superblock": false, 00:08:51.829 "num_base_bdevs": 3, 00:08:51.829 "num_base_bdevs_discovered": 3, 00:08:51.829 "num_base_bdevs_operational": 3, 00:08:51.829 "base_bdevs_list": [ 00:08:51.829 { 00:08:51.829 "name": "BaseBdev1", 00:08:51.829 "uuid": "a04648e9-df00-444f-b092-ca44ef1aacce", 00:08:51.829 "is_configured": true, 00:08:51.829 "data_offset": 0, 00:08:51.829 "data_size": 65536 00:08:51.829 }, 00:08:51.829 { 00:08:51.829 "name": "BaseBdev2", 00:08:51.829 "uuid": "a66b6fb7-d31b-4438-aed7-f9f6be794a43", 00:08:51.829 "is_configured": true, 00:08:51.829 "data_offset": 0, 00:08:51.829 "data_size": 65536 00:08:51.829 }, 00:08:51.829 { 00:08:51.829 "name": "BaseBdev3", 00:08:51.829 "uuid": "bfcb4b81-36d5-4c4a-b144-3bbdf215cfe4", 00:08:51.829 "is_configured": true, 00:08:51.829 "data_offset": 0, 00:08:51.829 "data_size": 65536 00:08:51.829 } 00:08:51.829 ] 00:08:51.829 }' 00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:51.829 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.088 [2024-12-12 09:22:26.067844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.088 "name": "Existed_Raid", 00:08:52.088 "aliases": [ 00:08:52.088 "a27e6898-d41e-4284-be26-39ef9a705bd7" 00:08:52.088 ], 00:08:52.088 "product_name": "Raid Volume", 00:08:52.088 "block_size": 512, 00:08:52.088 "num_blocks": 196608, 00:08:52.088 "uuid": "a27e6898-d41e-4284-be26-39ef9a705bd7", 00:08:52.088 "assigned_rate_limits": { 00:08:52.088 "rw_ios_per_sec": 0, 00:08:52.088 "rw_mbytes_per_sec": 0, 00:08:52.088 "r_mbytes_per_sec": 
0, 00:08:52.088 "w_mbytes_per_sec": 0 00:08:52.088 }, 00:08:52.088 "claimed": false, 00:08:52.088 "zoned": false, 00:08:52.088 "supported_io_types": { 00:08:52.088 "read": true, 00:08:52.088 "write": true, 00:08:52.088 "unmap": true, 00:08:52.088 "flush": true, 00:08:52.088 "reset": true, 00:08:52.088 "nvme_admin": false, 00:08:52.088 "nvme_io": false, 00:08:52.088 "nvme_io_md": false, 00:08:52.088 "write_zeroes": true, 00:08:52.088 "zcopy": false, 00:08:52.088 "get_zone_info": false, 00:08:52.088 "zone_management": false, 00:08:52.088 "zone_append": false, 00:08:52.088 "compare": false, 00:08:52.088 "compare_and_write": false, 00:08:52.088 "abort": false, 00:08:52.088 "seek_hole": false, 00:08:52.088 "seek_data": false, 00:08:52.088 "copy": false, 00:08:52.088 "nvme_iov_md": false 00:08:52.088 }, 00:08:52.088 "memory_domains": [ 00:08:52.088 { 00:08:52.088 "dma_device_id": "system", 00:08:52.088 "dma_device_type": 1 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.088 "dma_device_type": 2 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "dma_device_id": "system", 00:08:52.088 "dma_device_type": 1 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.088 "dma_device_type": 2 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "dma_device_id": "system", 00:08:52.088 "dma_device_type": 1 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.088 "dma_device_type": 2 00:08:52.088 } 00:08:52.088 ], 00:08:52.088 "driver_specific": { 00:08:52.088 "raid": { 00:08:52.088 "uuid": "a27e6898-d41e-4284-be26-39ef9a705bd7", 00:08:52.088 "strip_size_kb": 64, 00:08:52.088 "state": "online", 00:08:52.088 "raid_level": "concat", 00:08:52.088 "superblock": false, 00:08:52.088 "num_base_bdevs": 3, 00:08:52.088 "num_base_bdevs_discovered": 3, 00:08:52.088 "num_base_bdevs_operational": 3, 00:08:52.088 "base_bdevs_list": [ 00:08:52.088 { 00:08:52.088 "name": "BaseBdev1", 
00:08:52.088 "uuid": "a04648e9-df00-444f-b092-ca44ef1aacce", 00:08:52.088 "is_configured": true, 00:08:52.088 "data_offset": 0, 00:08:52.088 "data_size": 65536 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "name": "BaseBdev2", 00:08:52.088 "uuid": "a66b6fb7-d31b-4438-aed7-f9f6be794a43", 00:08:52.088 "is_configured": true, 00:08:52.088 "data_offset": 0, 00:08:52.088 "data_size": 65536 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "name": "BaseBdev3", 00:08:52.088 "uuid": "bfcb4b81-36d5-4c4a-b144-3bbdf215cfe4", 00:08:52.088 "is_configured": true, 00:08:52.088 "data_offset": 0, 00:08:52.088 "data_size": 65536 00:08:52.088 } 00:08:52.088 ] 00:08:52.088 } 00:08:52.088 } 00:08:52.088 }' 00:08:52.088 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:52.348 BaseBdev2 00:08:52.348 BaseBdev3' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.348 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.348 [2024-12-12 09:22:26.307150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.348 [2024-12-12 09:22:26.307179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.348 [2024-12-12 09:22:26.307234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.607 "name": "Existed_Raid", 00:08:52.607 "uuid": "a27e6898-d41e-4284-be26-39ef9a705bd7", 00:08:52.607 "strip_size_kb": 64, 00:08:52.607 "state": "offline", 00:08:52.607 "raid_level": "concat", 00:08:52.607 "superblock": false, 00:08:52.607 "num_base_bdevs": 3, 00:08:52.607 "num_base_bdevs_discovered": 2, 00:08:52.607 "num_base_bdevs_operational": 2, 00:08:52.607 "base_bdevs_list": [ 00:08:52.607 { 00:08:52.607 "name": null, 00:08:52.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.607 "is_configured": false, 00:08:52.607 "data_offset": 0, 00:08:52.607 "data_size": 65536 00:08:52.607 }, 00:08:52.607 { 00:08:52.607 "name": "BaseBdev2", 00:08:52.607 "uuid": 
"a66b6fb7-d31b-4438-aed7-f9f6be794a43", 00:08:52.607 "is_configured": true, 00:08:52.607 "data_offset": 0, 00:08:52.607 "data_size": 65536 00:08:52.607 }, 00:08:52.607 { 00:08:52.607 "name": "BaseBdev3", 00:08:52.607 "uuid": "bfcb4b81-36d5-4c4a-b144-3bbdf215cfe4", 00:08:52.607 "is_configured": true, 00:08:52.607 "data_offset": 0, 00:08:52.607 "data_size": 65536 00:08:52.607 } 00:08:52.607 ] 00:08:52.607 }' 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.607 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 [2024-12-12 09:22:26.888945] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.124 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.124 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.124 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.124 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.125 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:53.125 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.125 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.125 [2024-12-12 09:22:27.043150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.125 [2024-12-12 09:22:27.043214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:53.125 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.125 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.125 09:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.384 BaseBdev2 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:53.384 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.384 
09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 [ 00:08:53.385 { 00:08:53.385 "name": "BaseBdev2", 00:08:53.385 "aliases": [ 00:08:53.385 "179885c6-9ddc-46c0-8074-825bdb17eb6a" 00:08:53.385 ], 00:08:53.385 "product_name": "Malloc disk", 00:08:53.385 "block_size": 512, 00:08:53.385 "num_blocks": 65536, 00:08:53.385 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:53.385 "assigned_rate_limits": { 00:08:53.385 "rw_ios_per_sec": 0, 00:08:53.385 "rw_mbytes_per_sec": 0, 00:08:53.385 "r_mbytes_per_sec": 0, 00:08:53.385 "w_mbytes_per_sec": 0 00:08:53.385 }, 00:08:53.385 "claimed": false, 00:08:53.385 "zoned": false, 00:08:53.385 "supported_io_types": { 00:08:53.385 "read": true, 00:08:53.385 "write": true, 00:08:53.385 "unmap": true, 00:08:53.385 "flush": true, 00:08:53.385 "reset": true, 00:08:53.385 "nvme_admin": false, 00:08:53.385 "nvme_io": false, 00:08:53.385 "nvme_io_md": false, 00:08:53.385 "write_zeroes": true, 
00:08:53.385 "zcopy": true, 00:08:53.385 "get_zone_info": false, 00:08:53.385 "zone_management": false, 00:08:53.385 "zone_append": false, 00:08:53.385 "compare": false, 00:08:53.385 "compare_and_write": false, 00:08:53.385 "abort": true, 00:08:53.385 "seek_hole": false, 00:08:53.385 "seek_data": false, 00:08:53.385 "copy": true, 00:08:53.385 "nvme_iov_md": false 00:08:53.385 }, 00:08:53.385 "memory_domains": [ 00:08:53.385 { 00:08:53.385 "dma_device_id": "system", 00:08:53.385 "dma_device_type": 1 00:08:53.385 }, 00:08:53.385 { 00:08:53.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.385 "dma_device_type": 2 00:08:53.385 } 00:08:53.385 ], 00:08:53.385 "driver_specific": {} 00:08:53.385 } 00:08:53.385 ] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 BaseBdev3 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.385 09:22:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 [ 00:08:53.385 { 00:08:53.385 "name": "BaseBdev3", 00:08:53.385 "aliases": [ 00:08:53.385 "a00ae71d-0855-46d1-935a-801f8d1b314c" 00:08:53.385 ], 00:08:53.385 "product_name": "Malloc disk", 00:08:53.385 "block_size": 512, 00:08:53.385 "num_blocks": 65536, 00:08:53.385 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:53.385 "assigned_rate_limits": { 00:08:53.385 "rw_ios_per_sec": 0, 00:08:53.385 "rw_mbytes_per_sec": 0, 00:08:53.385 "r_mbytes_per_sec": 0, 00:08:53.385 "w_mbytes_per_sec": 0 00:08:53.385 }, 00:08:53.385 "claimed": false, 00:08:53.385 "zoned": false, 00:08:53.385 "supported_io_types": { 00:08:53.385 "read": true, 00:08:53.385 "write": true, 00:08:53.385 "unmap": true, 00:08:53.385 "flush": true, 00:08:53.385 "reset": true, 00:08:53.385 "nvme_admin": false, 00:08:53.385 "nvme_io": false, 00:08:53.385 "nvme_io_md": false, 00:08:53.385 "write_zeroes": true, 
00:08:53.385 "zcopy": true, 00:08:53.385 "get_zone_info": false, 00:08:53.385 "zone_management": false, 00:08:53.385 "zone_append": false, 00:08:53.385 "compare": false, 00:08:53.385 "compare_and_write": false, 00:08:53.385 "abort": true, 00:08:53.385 "seek_hole": false, 00:08:53.385 "seek_data": false, 00:08:53.385 "copy": true, 00:08:53.385 "nvme_iov_md": false 00:08:53.385 }, 00:08:53.385 "memory_domains": [ 00:08:53.385 { 00:08:53.385 "dma_device_id": "system", 00:08:53.385 "dma_device_type": 1 00:08:53.385 }, 00:08:53.385 { 00:08:53.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.385 "dma_device_type": 2 00:08:53.385 } 00:08:53.385 ], 00:08:53.385 "driver_specific": {} 00:08:53.385 } 00:08:53.385 ] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 [2024-12-12 09:22:27.359351] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.385 [2024-12-12 09:22:27.359410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.385 [2024-12-12 09:22:27.359431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.385 [2024-12-12 09:22:27.361474] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.645 09:22:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.645 "name": "Existed_Raid", 00:08:53.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.645 "strip_size_kb": 64, 00:08:53.645 "state": "configuring", 00:08:53.645 "raid_level": "concat", 00:08:53.645 "superblock": false, 00:08:53.645 "num_base_bdevs": 3, 00:08:53.645 "num_base_bdevs_discovered": 2, 00:08:53.645 "num_base_bdevs_operational": 3, 00:08:53.645 "base_bdevs_list": [ 00:08:53.645 { 00:08:53.645 "name": "BaseBdev1", 00:08:53.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.645 "is_configured": false, 00:08:53.645 "data_offset": 0, 00:08:53.645 "data_size": 0 00:08:53.645 }, 00:08:53.645 { 00:08:53.645 "name": "BaseBdev2", 00:08:53.645 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:53.645 "is_configured": true, 00:08:53.645 "data_offset": 0, 00:08:53.645 "data_size": 65536 00:08:53.645 }, 00:08:53.645 { 00:08:53.645 "name": "BaseBdev3", 00:08:53.645 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:53.645 "is_configured": true, 00:08:53.645 "data_offset": 0, 00:08:53.645 "data_size": 65536 00:08:53.645 } 00:08:53.645 ] 00:08:53.645 }' 00:08:53.645 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.645 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.905 [2024-12-12 09:22:27.846539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.905 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.905 "name": "Existed_Raid", 00:08:53.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.905 "strip_size_kb": 64, 00:08:53.905 "state": "configuring", 00:08:53.905 "raid_level": "concat", 00:08:53.905 "superblock": false, 
00:08:53.905 "num_base_bdevs": 3, 00:08:53.905 "num_base_bdevs_discovered": 1, 00:08:53.905 "num_base_bdevs_operational": 3, 00:08:53.905 "base_bdevs_list": [ 00:08:53.905 { 00:08:53.905 "name": "BaseBdev1", 00:08:53.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.905 "is_configured": false, 00:08:53.905 "data_offset": 0, 00:08:53.905 "data_size": 0 00:08:53.905 }, 00:08:53.905 { 00:08:53.906 "name": null, 00:08:53.906 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:53.906 "is_configured": false, 00:08:53.906 "data_offset": 0, 00:08:53.906 "data_size": 65536 00:08:53.906 }, 00:08:53.906 { 00:08:53.906 "name": "BaseBdev3", 00:08:53.906 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:53.906 "is_configured": true, 00:08:53.906 "data_offset": 0, 00:08:53.906 "data_size": 65536 00:08:53.906 } 00:08:53.906 ] 00:08:53.906 }' 00:08:53.906 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.906 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.474 
09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.474 [2024-12-12 09:22:28.320006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.474 BaseBdev1 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.474 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.474 [ 00:08:54.474 { 00:08:54.474 "name": "BaseBdev1", 00:08:54.474 "aliases": [ 00:08:54.474 "4aa1da05-77f2-48c2-9877-1526f1bc7d72" 00:08:54.474 ], 00:08:54.474 "product_name": 
"Malloc disk", 00:08:54.474 "block_size": 512, 00:08:54.474 "num_blocks": 65536, 00:08:54.474 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:54.474 "assigned_rate_limits": { 00:08:54.474 "rw_ios_per_sec": 0, 00:08:54.474 "rw_mbytes_per_sec": 0, 00:08:54.474 "r_mbytes_per_sec": 0, 00:08:54.474 "w_mbytes_per_sec": 0 00:08:54.474 }, 00:08:54.474 "claimed": true, 00:08:54.474 "claim_type": "exclusive_write", 00:08:54.474 "zoned": false, 00:08:54.474 "supported_io_types": { 00:08:54.474 "read": true, 00:08:54.474 "write": true, 00:08:54.474 "unmap": true, 00:08:54.474 "flush": true, 00:08:54.474 "reset": true, 00:08:54.474 "nvme_admin": false, 00:08:54.474 "nvme_io": false, 00:08:54.474 "nvme_io_md": false, 00:08:54.474 "write_zeroes": true, 00:08:54.474 "zcopy": true, 00:08:54.474 "get_zone_info": false, 00:08:54.474 "zone_management": false, 00:08:54.474 "zone_append": false, 00:08:54.474 "compare": false, 00:08:54.474 "compare_and_write": false, 00:08:54.474 "abort": true, 00:08:54.474 "seek_hole": false, 00:08:54.474 "seek_data": false, 00:08:54.474 "copy": true, 00:08:54.474 "nvme_iov_md": false 00:08:54.474 }, 00:08:54.474 "memory_domains": [ 00:08:54.474 { 00:08:54.474 "dma_device_id": "system", 00:08:54.474 "dma_device_type": 1 00:08:54.474 }, 00:08:54.474 { 00:08:54.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.474 "dma_device_type": 2 00:08:54.474 } 00:08:54.474 ], 00:08:54.474 "driver_specific": {} 00:08:54.474 } 00:08:54.474 ] 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.475 09:22:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.475 "name": "Existed_Raid", 00:08:54.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.475 "strip_size_kb": 64, 00:08:54.475 "state": "configuring", 00:08:54.475 "raid_level": "concat", 00:08:54.475 "superblock": false, 00:08:54.475 "num_base_bdevs": 3, 00:08:54.475 "num_base_bdevs_discovered": 2, 00:08:54.475 "num_base_bdevs_operational": 3, 00:08:54.475 "base_bdevs_list": [ 00:08:54.475 { 00:08:54.475 "name": "BaseBdev1", 
00:08:54.475 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:54.475 "is_configured": true, 00:08:54.475 "data_offset": 0, 00:08:54.475 "data_size": 65536 00:08:54.475 }, 00:08:54.475 { 00:08:54.475 "name": null, 00:08:54.475 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:54.475 "is_configured": false, 00:08:54.475 "data_offset": 0, 00:08:54.475 "data_size": 65536 00:08:54.475 }, 00:08:54.475 { 00:08:54.475 "name": "BaseBdev3", 00:08:54.475 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:54.475 "is_configured": true, 00:08:54.475 "data_offset": 0, 00:08:54.475 "data_size": 65536 00:08:54.475 } 00:08:54.475 ] 00:08:54.475 }' 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.475 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.043 [2024-12-12 09:22:28.859127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.043 
09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.043 "name": "Existed_Raid", 00:08:55.043 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:55.043 "strip_size_kb": 64, 00:08:55.043 "state": "configuring", 00:08:55.043 "raid_level": "concat", 00:08:55.043 "superblock": false, 00:08:55.043 "num_base_bdevs": 3, 00:08:55.043 "num_base_bdevs_discovered": 1, 00:08:55.043 "num_base_bdevs_operational": 3, 00:08:55.043 "base_bdevs_list": [ 00:08:55.043 { 00:08:55.043 "name": "BaseBdev1", 00:08:55.043 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:55.043 "is_configured": true, 00:08:55.043 "data_offset": 0, 00:08:55.043 "data_size": 65536 00:08:55.043 }, 00:08:55.043 { 00:08:55.043 "name": null, 00:08:55.043 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:55.043 "is_configured": false, 00:08:55.043 "data_offset": 0, 00:08:55.043 "data_size": 65536 00:08:55.043 }, 00:08:55.043 { 00:08:55.043 "name": null, 00:08:55.043 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:55.043 "is_configured": false, 00:08:55.043 "data_offset": 0, 00:08:55.043 "data_size": 65536 00:08:55.043 } 00:08:55.043 ] 00:08:55.043 }' 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.043 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.302 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.302 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.302 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.302 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.302 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.561 [2024-12-12 09:22:29.346343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.561 "name": "Existed_Raid", 00:08:55.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.561 "strip_size_kb": 64, 00:08:55.561 "state": "configuring", 00:08:55.561 "raid_level": "concat", 00:08:55.561 "superblock": false, 00:08:55.561 "num_base_bdevs": 3, 00:08:55.561 "num_base_bdevs_discovered": 2, 00:08:55.561 "num_base_bdevs_operational": 3, 00:08:55.561 "base_bdevs_list": [ 00:08:55.561 { 00:08:55.561 "name": "BaseBdev1", 00:08:55.561 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:55.561 "is_configured": true, 00:08:55.561 "data_offset": 0, 00:08:55.561 "data_size": 65536 00:08:55.561 }, 00:08:55.561 { 00:08:55.561 "name": null, 00:08:55.561 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:55.561 "is_configured": false, 00:08:55.561 "data_offset": 0, 00:08:55.561 "data_size": 65536 00:08:55.561 }, 00:08:55.561 { 00:08:55.561 "name": "BaseBdev3", 00:08:55.561 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:55.561 "is_configured": true, 00:08:55.561 "data_offset": 0, 00:08:55.561 "data_size": 65536 00:08:55.561 } 00:08:55.561 ] 00:08:55.561 }' 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.561 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.821 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.821 [2024-12-12 09:22:29.833562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.079 09:22:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.079 "name": "Existed_Raid", 00:08:56.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.079 "strip_size_kb": 64, 00:08:56.079 "state": "configuring", 00:08:56.079 "raid_level": "concat", 00:08:56.079 "superblock": false, 00:08:56.079 "num_base_bdevs": 3, 00:08:56.079 "num_base_bdevs_discovered": 1, 00:08:56.079 "num_base_bdevs_operational": 3, 00:08:56.079 "base_bdevs_list": [ 00:08:56.079 { 00:08:56.079 "name": null, 00:08:56.079 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:56.079 "is_configured": false, 00:08:56.079 "data_offset": 0, 00:08:56.079 "data_size": 65536 00:08:56.079 }, 00:08:56.079 { 00:08:56.079 "name": null, 00:08:56.079 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:56.079 "is_configured": false, 00:08:56.079 "data_offset": 0, 00:08:56.079 "data_size": 65536 00:08:56.079 }, 00:08:56.079 { 00:08:56.079 "name": "BaseBdev3", 00:08:56.079 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:56.079 "is_configured": true, 00:08:56.079 "data_offset": 0, 00:08:56.079 "data_size": 65536 00:08:56.079 } 00:08:56.079 ] 00:08:56.079 }' 00:08:56.079 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.079 09:22:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.645 [2024-12-12 09:22:30.428347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.645 09:22:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.645 "name": "Existed_Raid", 00:08:56.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.645 "strip_size_kb": 64, 00:08:56.645 "state": "configuring", 00:08:56.645 "raid_level": "concat", 00:08:56.645 "superblock": false, 00:08:56.645 "num_base_bdevs": 3, 00:08:56.645 "num_base_bdevs_discovered": 2, 00:08:56.645 "num_base_bdevs_operational": 3, 00:08:56.645 "base_bdevs_list": [ 00:08:56.645 { 00:08:56.645 "name": null, 00:08:56.645 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:56.645 "is_configured": false, 00:08:56.645 "data_offset": 0, 00:08:56.645 "data_size": 65536 00:08:56.645 }, 00:08:56.645 { 00:08:56.645 "name": "BaseBdev2", 00:08:56.645 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:56.645 "is_configured": true, 00:08:56.645 "data_offset": 
0, 00:08:56.645 "data_size": 65536 00:08:56.645 }, 00:08:56.645 { 00:08:56.645 "name": "BaseBdev3", 00:08:56.645 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:56.645 "is_configured": true, 00:08:56.645 "data_offset": 0, 00:08:56.645 "data_size": 65536 00:08:56.645 } 00:08:56.645 ] 00:08:56.645 }' 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.645 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4aa1da05-77f2-48c2-9877-1526f1bc7d72 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.903 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 [2024-12-12 09:22:30.953020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:57.163 [2024-12-12 09:22:30.953070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:57.163 [2024-12-12 09:22:30.953080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:57.163 [2024-12-12 09:22:30.953361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:57.163 [2024-12-12 09:22:30.953540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:57.163 [2024-12-12 09:22:30.953550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:57.163 [2024-12-12 09:22:30.953836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.163 NewBaseBdev 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.163 
09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.163 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 [ 00:08:57.163 { 00:08:57.164 "name": "NewBaseBdev", 00:08:57.164 "aliases": [ 00:08:57.164 "4aa1da05-77f2-48c2-9877-1526f1bc7d72" 00:08:57.164 ], 00:08:57.164 "product_name": "Malloc disk", 00:08:57.164 "block_size": 512, 00:08:57.164 "num_blocks": 65536, 00:08:57.164 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:57.164 "assigned_rate_limits": { 00:08:57.164 "rw_ios_per_sec": 0, 00:08:57.164 "rw_mbytes_per_sec": 0, 00:08:57.164 "r_mbytes_per_sec": 0, 00:08:57.164 "w_mbytes_per_sec": 0 00:08:57.164 }, 00:08:57.164 "claimed": true, 00:08:57.164 "claim_type": "exclusive_write", 00:08:57.164 "zoned": false, 00:08:57.164 "supported_io_types": { 00:08:57.164 "read": true, 00:08:57.164 "write": true, 00:08:57.164 "unmap": true, 00:08:57.164 "flush": true, 00:08:57.164 "reset": true, 00:08:57.164 "nvme_admin": false, 00:08:57.164 "nvme_io": false, 00:08:57.164 "nvme_io_md": false, 00:08:57.164 "write_zeroes": true, 00:08:57.164 "zcopy": true, 00:08:57.164 "get_zone_info": false, 00:08:57.164 "zone_management": false, 00:08:57.164 "zone_append": false, 00:08:57.164 "compare": false, 00:08:57.164 "compare_and_write": false, 00:08:57.164 "abort": true, 00:08:57.164 "seek_hole": false, 00:08:57.164 "seek_data": false, 00:08:57.164 "copy": true, 00:08:57.164 "nvme_iov_md": false 00:08:57.164 }, 00:08:57.164 
"memory_domains": [ 00:08:57.164 { 00:08:57.164 "dma_device_id": "system", 00:08:57.164 "dma_device_type": 1 00:08:57.164 }, 00:08:57.164 { 00:08:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.164 "dma_device_type": 2 00:08:57.164 } 00:08:57.164 ], 00:08:57.164 "driver_specific": {} 00:08:57.164 } 00:08:57.164 ] 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.164 "name": "Existed_Raid", 00:08:57.164 "uuid": "eeabe290-49e3-4303-bd60-d174ef96624e", 00:08:57.164 "strip_size_kb": 64, 00:08:57.164 "state": "online", 00:08:57.164 "raid_level": "concat", 00:08:57.164 "superblock": false, 00:08:57.164 "num_base_bdevs": 3, 00:08:57.164 "num_base_bdevs_discovered": 3, 00:08:57.164 "num_base_bdevs_operational": 3, 00:08:57.164 "base_bdevs_list": [ 00:08:57.164 { 00:08:57.164 "name": "NewBaseBdev", 00:08:57.164 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:57.164 "is_configured": true, 00:08:57.164 "data_offset": 0, 00:08:57.164 "data_size": 65536 00:08:57.164 }, 00:08:57.164 { 00:08:57.164 "name": "BaseBdev2", 00:08:57.164 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:57.164 "is_configured": true, 00:08:57.164 "data_offset": 0, 00:08:57.164 "data_size": 65536 00:08:57.164 }, 00:08:57.164 { 00:08:57.164 "name": "BaseBdev3", 00:08:57.164 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:57.164 "is_configured": true, 00:08:57.164 "data_offset": 0, 00:08:57.164 "data_size": 65536 00:08:57.164 } 00:08:57.164 ] 00:08:57.164 }' 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.164 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.423 [2024-12-12 09:22:31.408534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.423 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.682 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.682 "name": "Existed_Raid", 00:08:57.682 "aliases": [ 00:08:57.682 "eeabe290-49e3-4303-bd60-d174ef96624e" 00:08:57.682 ], 00:08:57.682 "product_name": "Raid Volume", 00:08:57.682 "block_size": 512, 00:08:57.682 "num_blocks": 196608, 00:08:57.682 "uuid": "eeabe290-49e3-4303-bd60-d174ef96624e", 00:08:57.682 "assigned_rate_limits": { 00:08:57.682 "rw_ios_per_sec": 0, 00:08:57.682 "rw_mbytes_per_sec": 0, 00:08:57.682 "r_mbytes_per_sec": 0, 00:08:57.682 "w_mbytes_per_sec": 0 00:08:57.682 }, 00:08:57.682 "claimed": false, 00:08:57.682 "zoned": false, 00:08:57.682 "supported_io_types": { 00:08:57.682 "read": true, 00:08:57.682 "write": true, 00:08:57.682 "unmap": true, 00:08:57.682 "flush": true, 00:08:57.682 "reset": true, 00:08:57.682 "nvme_admin": false, 00:08:57.682 "nvme_io": false, 00:08:57.682 "nvme_io_md": false, 00:08:57.682 "write_zeroes": true, 
00:08:57.682 "zcopy": false, 00:08:57.682 "get_zone_info": false, 00:08:57.682 "zone_management": false, 00:08:57.682 "zone_append": false, 00:08:57.682 "compare": false, 00:08:57.682 "compare_and_write": false, 00:08:57.682 "abort": false, 00:08:57.682 "seek_hole": false, 00:08:57.682 "seek_data": false, 00:08:57.682 "copy": false, 00:08:57.682 "nvme_iov_md": false 00:08:57.682 }, 00:08:57.682 "memory_domains": [ 00:08:57.682 { 00:08:57.682 "dma_device_id": "system", 00:08:57.682 "dma_device_type": 1 00:08:57.682 }, 00:08:57.682 { 00:08:57.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.682 "dma_device_type": 2 00:08:57.682 }, 00:08:57.682 { 00:08:57.682 "dma_device_id": "system", 00:08:57.682 "dma_device_type": 1 00:08:57.682 }, 00:08:57.682 { 00:08:57.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.682 "dma_device_type": 2 00:08:57.682 }, 00:08:57.682 { 00:08:57.682 "dma_device_id": "system", 00:08:57.682 "dma_device_type": 1 00:08:57.682 }, 00:08:57.682 { 00:08:57.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.682 "dma_device_type": 2 00:08:57.682 } 00:08:57.682 ], 00:08:57.682 "driver_specific": { 00:08:57.682 "raid": { 00:08:57.682 "uuid": "eeabe290-49e3-4303-bd60-d174ef96624e", 00:08:57.682 "strip_size_kb": 64, 00:08:57.682 "state": "online", 00:08:57.682 "raid_level": "concat", 00:08:57.682 "superblock": false, 00:08:57.682 "num_base_bdevs": 3, 00:08:57.682 "num_base_bdevs_discovered": 3, 00:08:57.682 "num_base_bdevs_operational": 3, 00:08:57.682 "base_bdevs_list": [ 00:08:57.682 { 00:08:57.683 "name": "NewBaseBdev", 00:08:57.683 "uuid": "4aa1da05-77f2-48c2-9877-1526f1bc7d72", 00:08:57.683 "is_configured": true, 00:08:57.683 "data_offset": 0, 00:08:57.683 "data_size": 65536 00:08:57.683 }, 00:08:57.683 { 00:08:57.683 "name": "BaseBdev2", 00:08:57.683 "uuid": "179885c6-9ddc-46c0-8074-825bdb17eb6a", 00:08:57.683 "is_configured": true, 00:08:57.683 "data_offset": 0, 00:08:57.683 "data_size": 65536 00:08:57.683 }, 00:08:57.683 { 
00:08:57.683 "name": "BaseBdev3", 00:08:57.683 "uuid": "a00ae71d-0855-46d1-935a-801f8d1b314c", 00:08:57.683 "is_configured": true, 00:08:57.683 "data_offset": 0, 00:08:57.683 "data_size": 65536 00:08:57.683 } 00:08:57.683 ] 00:08:57.683 } 00:08:57.683 } 00:08:57.683 }' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:57.683 BaseBdev2 00:08:57.683 BaseBdev3' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:57.683 [2024-12-12 09:22:31.675794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.683 [2024-12-12 09:22:31.675823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.683 [2024-12-12 09:22:31.675901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.683 [2024-12-12 09:22:31.675961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.683 [2024-12-12 09:22:31.675984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66755 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66755 ']' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66755 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.683 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66755 00:08:57.942 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.942 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.942 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66755' 00:08:57.942 killing process with pid 66755 00:08:57.942 09:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 66755 00:08:57.942 [2024-12-12 09:22:31.723393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.942 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66755 00:08:58.201 [2024-12-12 09:22:32.048264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:59.580 00:08:59.580 real 0m10.522s 00:08:59.580 user 0m16.430s 00:08:59.580 sys 0m1.923s 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.580 ************************************ 00:08:59.580 END TEST raid_state_function_test 00:08:59.580 ************************************ 00:08:59.580 09:22:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:59.580 09:22:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:59.580 09:22:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.580 09:22:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.580 ************************************ 00:08:59.580 START TEST raid_state_function_test_sb 00:08:59.580 ************************************ 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67376 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67376' 00:08:59.580 Process raid pid: 67376 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67376 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67376 ']' 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.580 09:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.580 [2024-12-12 09:22:33.405960] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:08:59.580 [2024-12-12 09:22:33.406093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.580 [2024-12-12 09:22:33.578235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.839 [2024-12-12 09:22:33.710676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.098 [2024-12-12 09:22:33.944810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.098 [2024-12-12 09:22:33.944851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.358 [2024-12-12 09:22:34.217603] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.358 [2024-12-12 09:22:34.217667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.358 [2024-12-12 
09:22:34.217678] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.358 [2024-12-12 09:22:34.217688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.358 [2024-12-12 09:22:34.217694] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.358 [2024-12-12 09:22:34.217703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.358 "name": "Existed_Raid", 00:09:00.358 "uuid": "0f3ba8a3-5e45-4da4-b48e-42a536390900", 00:09:00.358 "strip_size_kb": 64, 00:09:00.358 "state": "configuring", 00:09:00.358 "raid_level": "concat", 00:09:00.358 "superblock": true, 00:09:00.358 "num_base_bdevs": 3, 00:09:00.358 "num_base_bdevs_discovered": 0, 00:09:00.358 "num_base_bdevs_operational": 3, 00:09:00.358 "base_bdevs_list": [ 00:09:00.358 { 00:09:00.358 "name": "BaseBdev1", 00:09:00.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.358 "is_configured": false, 00:09:00.358 "data_offset": 0, 00:09:00.358 "data_size": 0 00:09:00.358 }, 00:09:00.358 { 00:09:00.358 "name": "BaseBdev2", 00:09:00.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.358 "is_configured": false, 00:09:00.358 "data_offset": 0, 00:09:00.358 "data_size": 0 00:09:00.358 }, 00:09:00.358 { 00:09:00.358 "name": "BaseBdev3", 00:09:00.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.358 "is_configured": false, 00:09:00.358 "data_offset": 0, 00:09:00.358 "data_size": 0 00:09:00.358 } 00:09:00.358 ] 00:09:00.358 }' 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.358 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 [2024-12-12 09:22:34.664770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.928 [2024-12-12 09:22:34.664887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 [2024-12-12 09:22:34.672768] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.928 [2024-12-12 09:22:34.672855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.928 [2024-12-12 09:22:34.672885] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.928 [2024-12-12 09:22:34.672910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.928 [2024-12-12 09:22:34.672935] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.928 [2024-12-12 09:22:34.672967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.928 
09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 [2024-12-12 09:22:34.725663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.928 BaseBdev1 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 [ 00:09:00.928 { 
00:09:00.928 "name": "BaseBdev1", 00:09:00.928 "aliases": [ 00:09:00.928 "fb9adb90-5bd9-4f30-afbc-e9260dc3f023" 00:09:00.928 ], 00:09:00.928 "product_name": "Malloc disk", 00:09:00.928 "block_size": 512, 00:09:00.928 "num_blocks": 65536, 00:09:00.928 "uuid": "fb9adb90-5bd9-4f30-afbc-e9260dc3f023", 00:09:00.928 "assigned_rate_limits": { 00:09:00.928 "rw_ios_per_sec": 0, 00:09:00.928 "rw_mbytes_per_sec": 0, 00:09:00.928 "r_mbytes_per_sec": 0, 00:09:00.928 "w_mbytes_per_sec": 0 00:09:00.928 }, 00:09:00.928 "claimed": true, 00:09:00.928 "claim_type": "exclusive_write", 00:09:00.928 "zoned": false, 00:09:00.928 "supported_io_types": { 00:09:00.928 "read": true, 00:09:00.928 "write": true, 00:09:00.928 "unmap": true, 00:09:00.928 "flush": true, 00:09:00.928 "reset": true, 00:09:00.928 "nvme_admin": false, 00:09:00.928 "nvme_io": false, 00:09:00.928 "nvme_io_md": false, 00:09:00.928 "write_zeroes": true, 00:09:00.928 "zcopy": true, 00:09:00.928 "get_zone_info": false, 00:09:00.928 "zone_management": false, 00:09:00.928 "zone_append": false, 00:09:00.928 "compare": false, 00:09:00.928 "compare_and_write": false, 00:09:00.928 "abort": true, 00:09:00.928 "seek_hole": false, 00:09:00.928 "seek_data": false, 00:09:00.928 "copy": true, 00:09:00.928 "nvme_iov_md": false 00:09:00.928 }, 00:09:00.928 "memory_domains": [ 00:09:00.928 { 00:09:00.928 "dma_device_id": "system", 00:09:00.928 "dma_device_type": 1 00:09:00.928 }, 00:09:00.928 { 00:09:00.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.928 "dma_device_type": 2 00:09:00.928 } 00:09:00.928 ], 00:09:00.928 "driver_specific": {} 00:09:00.928 } 00:09:00.928 ] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.928 "name": "Existed_Raid", 00:09:00.928 "uuid": "2611080a-1be5-4be9-b69c-ded5138295dc", 00:09:00.928 "strip_size_kb": 64, 00:09:00.928 "state": "configuring", 00:09:00.928 "raid_level": "concat", 00:09:00.928 "superblock": true, 00:09:00.928 
"num_base_bdevs": 3, 00:09:00.928 "num_base_bdevs_discovered": 1, 00:09:00.928 "num_base_bdevs_operational": 3, 00:09:00.928 "base_bdevs_list": [ 00:09:00.928 { 00:09:00.928 "name": "BaseBdev1", 00:09:00.928 "uuid": "fb9adb90-5bd9-4f30-afbc-e9260dc3f023", 00:09:00.928 "is_configured": true, 00:09:00.928 "data_offset": 2048, 00:09:00.928 "data_size": 63488 00:09:00.928 }, 00:09:00.928 { 00:09:00.928 "name": "BaseBdev2", 00:09:00.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.928 "is_configured": false, 00:09:00.928 "data_offset": 0, 00:09:00.928 "data_size": 0 00:09:00.928 }, 00:09:00.928 { 00:09:00.928 "name": "BaseBdev3", 00:09:00.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.928 "is_configured": false, 00:09:00.928 "data_offset": 0, 00:09:00.928 "data_size": 0 00:09:00.928 } 00:09:00.928 ] 00:09:00.928 }' 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.928 09:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.188 [2024-12-12 09:22:35.177008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.188 [2024-12-12 09:22:35.177058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.188 
09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.188 [2024-12-12 09:22:35.189043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.188 [2024-12-12 09:22:35.191170] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.188 [2024-12-12 09:22:35.191215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.188 [2024-12-12 09:22:35.191225] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.188 [2024-12-12 09:22:35.191234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.188 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.448 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.448 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.448 "name": "Existed_Raid", 00:09:01.448 "uuid": "a22bdfc1-516a-409e-b36e-a4eff78e6cb4", 00:09:01.448 "strip_size_kb": 64, 00:09:01.448 "state": "configuring", 00:09:01.448 "raid_level": "concat", 00:09:01.448 "superblock": true, 00:09:01.448 "num_base_bdevs": 3, 00:09:01.448 "num_base_bdevs_discovered": 1, 00:09:01.448 "num_base_bdevs_operational": 3, 00:09:01.448 "base_bdevs_list": [ 00:09:01.448 { 00:09:01.448 "name": "BaseBdev1", 00:09:01.448 "uuid": "fb9adb90-5bd9-4f30-afbc-e9260dc3f023", 00:09:01.448 "is_configured": true, 00:09:01.448 "data_offset": 2048, 00:09:01.448 "data_size": 63488 00:09:01.448 }, 00:09:01.448 { 00:09:01.448 "name": "BaseBdev2", 00:09:01.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.448 "is_configured": false, 00:09:01.448 "data_offset": 0, 00:09:01.448 "data_size": 0 00:09:01.448 }, 00:09:01.448 { 00:09:01.448 "name": "BaseBdev3", 00:09:01.448 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:01.448 "is_configured": false, 00:09:01.448 "data_offset": 0, 00:09:01.448 "data_size": 0 00:09:01.448 } 00:09:01.448 ] 00:09:01.448 }' 00:09:01.448 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.448 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.707 [2024-12-12 09:22:35.609101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.707 BaseBdev2 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:01.707 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.708 [ 00:09:01.708 { 00:09:01.708 "name": "BaseBdev2", 00:09:01.708 "aliases": [ 00:09:01.708 "0c3b66d5-a744-4ead-8e5e-4acb9308b0a2" 00:09:01.708 ], 00:09:01.708 "product_name": "Malloc disk", 00:09:01.708 "block_size": 512, 00:09:01.708 "num_blocks": 65536, 00:09:01.708 "uuid": "0c3b66d5-a744-4ead-8e5e-4acb9308b0a2", 00:09:01.708 "assigned_rate_limits": { 00:09:01.708 "rw_ios_per_sec": 0, 00:09:01.708 "rw_mbytes_per_sec": 0, 00:09:01.708 "r_mbytes_per_sec": 0, 00:09:01.708 "w_mbytes_per_sec": 0 00:09:01.708 }, 00:09:01.708 "claimed": true, 00:09:01.708 "claim_type": "exclusive_write", 00:09:01.708 "zoned": false, 00:09:01.708 "supported_io_types": { 00:09:01.708 "read": true, 00:09:01.708 "write": true, 00:09:01.708 "unmap": true, 00:09:01.708 "flush": true, 00:09:01.708 "reset": true, 00:09:01.708 "nvme_admin": false, 00:09:01.708 "nvme_io": false, 00:09:01.708 "nvme_io_md": false, 00:09:01.708 "write_zeroes": true, 00:09:01.708 "zcopy": true, 00:09:01.708 "get_zone_info": false, 00:09:01.708 "zone_management": false, 00:09:01.708 "zone_append": false, 00:09:01.708 "compare": false, 00:09:01.708 "compare_and_write": false, 00:09:01.708 "abort": true, 00:09:01.708 "seek_hole": false, 00:09:01.708 "seek_data": false, 00:09:01.708 "copy": true, 00:09:01.708 "nvme_iov_md": false 00:09:01.708 }, 00:09:01.708 "memory_domains": [ 00:09:01.708 { 00:09:01.708 "dma_device_id": "system", 00:09:01.708 "dma_device_type": 1 00:09:01.708 }, 00:09:01.708 { 00:09:01.708 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.708 "dma_device_type": 2 00:09:01.708 } 00:09:01.708 ], 00:09:01.708 "driver_specific": {} 00:09:01.708 } 00:09:01.708 ] 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.708 "name": "Existed_Raid", 00:09:01.708 "uuid": "a22bdfc1-516a-409e-b36e-a4eff78e6cb4", 00:09:01.708 "strip_size_kb": 64, 00:09:01.708 "state": "configuring", 00:09:01.708 "raid_level": "concat", 00:09:01.708 "superblock": true, 00:09:01.708 "num_base_bdevs": 3, 00:09:01.708 "num_base_bdevs_discovered": 2, 00:09:01.708 "num_base_bdevs_operational": 3, 00:09:01.708 "base_bdevs_list": [ 00:09:01.708 { 00:09:01.708 "name": "BaseBdev1", 00:09:01.708 "uuid": "fb9adb90-5bd9-4f30-afbc-e9260dc3f023", 00:09:01.708 "is_configured": true, 00:09:01.708 "data_offset": 2048, 00:09:01.708 "data_size": 63488 00:09:01.708 }, 00:09:01.708 { 00:09:01.708 "name": "BaseBdev2", 00:09:01.708 "uuid": "0c3b66d5-a744-4ead-8e5e-4acb9308b0a2", 00:09:01.708 "is_configured": true, 00:09:01.708 "data_offset": 2048, 00:09:01.708 "data_size": 63488 00:09:01.708 }, 00:09:01.708 { 00:09:01.708 "name": "BaseBdev3", 00:09:01.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.708 "is_configured": false, 00:09:01.708 "data_offset": 0, 00:09:01.708 "data_size": 0 00:09:01.708 } 00:09:01.708 ] 00:09:01.708 }' 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.708 09:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.277 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.277 09:22:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.277 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.277 [2024-12-12 09:22:36.144542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.277 [2024-12-12 09:22:36.144935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.277 [2024-12-12 09:22:36.144985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.277 [2024-12-12 09:22:36.145295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:02.277 [2024-12-12 09:22:36.145471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.277 BaseBdev3 00:09:02.277 [2024-12-12 09:22:36.145482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:02.277 [2024-12-12 09:22:36.145640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.277 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.277 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:02.277 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.277 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.278 [ 00:09:02.278 { 00:09:02.278 "name": "BaseBdev3", 00:09:02.278 "aliases": [ 00:09:02.278 "cde64ed5-3d1a-4e12-9828-1765b43f7777" 00:09:02.278 ], 00:09:02.278 "product_name": "Malloc disk", 00:09:02.278 "block_size": 512, 00:09:02.278 "num_blocks": 65536, 00:09:02.278 "uuid": "cde64ed5-3d1a-4e12-9828-1765b43f7777", 00:09:02.278 "assigned_rate_limits": { 00:09:02.278 "rw_ios_per_sec": 0, 00:09:02.278 "rw_mbytes_per_sec": 0, 00:09:02.278 "r_mbytes_per_sec": 0, 00:09:02.278 "w_mbytes_per_sec": 0 00:09:02.278 }, 00:09:02.278 "claimed": true, 00:09:02.278 "claim_type": "exclusive_write", 00:09:02.278 "zoned": false, 00:09:02.278 "supported_io_types": { 00:09:02.278 "read": true, 00:09:02.278 "write": true, 00:09:02.278 "unmap": true, 00:09:02.278 "flush": true, 00:09:02.278 "reset": true, 00:09:02.278 "nvme_admin": false, 00:09:02.278 "nvme_io": false, 00:09:02.278 "nvme_io_md": false, 00:09:02.278 "write_zeroes": true, 00:09:02.278 "zcopy": true, 00:09:02.278 "get_zone_info": false, 00:09:02.278 "zone_management": false, 00:09:02.278 "zone_append": false, 00:09:02.278 "compare": false, 00:09:02.278 "compare_and_write": false, 00:09:02.278 "abort": true, 00:09:02.278 "seek_hole": false, 00:09:02.278 "seek_data": false, 
00:09:02.278 "copy": true, 00:09:02.278 "nvme_iov_md": false 00:09:02.278 }, 00:09:02.278 "memory_domains": [ 00:09:02.278 { 00:09:02.278 "dma_device_id": "system", 00:09:02.278 "dma_device_type": 1 00:09:02.278 }, 00:09:02.278 { 00:09:02.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.278 "dma_device_type": 2 00:09:02.278 } 00:09:02.278 ], 00:09:02.278 "driver_specific": {} 00:09:02.278 } 00:09:02.278 ] 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.278 "name": "Existed_Raid", 00:09:02.278 "uuid": "a22bdfc1-516a-409e-b36e-a4eff78e6cb4", 00:09:02.278 "strip_size_kb": 64, 00:09:02.278 "state": "online", 00:09:02.278 "raid_level": "concat", 00:09:02.278 "superblock": true, 00:09:02.278 "num_base_bdevs": 3, 00:09:02.278 "num_base_bdevs_discovered": 3, 00:09:02.278 "num_base_bdevs_operational": 3, 00:09:02.278 "base_bdevs_list": [ 00:09:02.278 { 00:09:02.278 "name": "BaseBdev1", 00:09:02.278 "uuid": "fb9adb90-5bd9-4f30-afbc-e9260dc3f023", 00:09:02.278 "is_configured": true, 00:09:02.278 "data_offset": 2048, 00:09:02.278 "data_size": 63488 00:09:02.278 }, 00:09:02.278 { 00:09:02.278 "name": "BaseBdev2", 00:09:02.278 "uuid": "0c3b66d5-a744-4ead-8e5e-4acb9308b0a2", 00:09:02.278 "is_configured": true, 00:09:02.278 "data_offset": 2048, 00:09:02.278 "data_size": 63488 00:09:02.278 }, 00:09:02.278 { 00:09:02.278 "name": "BaseBdev3", 00:09:02.278 "uuid": "cde64ed5-3d1a-4e12-9828-1765b43f7777", 00:09:02.278 "is_configured": true, 00:09:02.278 "data_offset": 2048, 00:09:02.278 "data_size": 63488 00:09:02.278 } 00:09:02.278 ] 00:09:02.278 }' 00:09:02.278 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.278 09:22:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.847 [2024-12-12 09:22:36.636029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.847 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.847 "name": "Existed_Raid", 00:09:02.847 "aliases": [ 00:09:02.847 "a22bdfc1-516a-409e-b36e-a4eff78e6cb4" 00:09:02.847 ], 00:09:02.847 "product_name": "Raid Volume", 00:09:02.847 "block_size": 512, 00:09:02.847 "num_blocks": 190464, 00:09:02.847 "uuid": "a22bdfc1-516a-409e-b36e-a4eff78e6cb4", 00:09:02.847 "assigned_rate_limits": { 00:09:02.847 "rw_ios_per_sec": 0, 00:09:02.847 "rw_mbytes_per_sec": 0, 00:09:02.847 
"r_mbytes_per_sec": 0, 00:09:02.847 "w_mbytes_per_sec": 0 00:09:02.847 }, 00:09:02.847 "claimed": false, 00:09:02.847 "zoned": false, 00:09:02.847 "supported_io_types": { 00:09:02.847 "read": true, 00:09:02.847 "write": true, 00:09:02.847 "unmap": true, 00:09:02.847 "flush": true, 00:09:02.847 "reset": true, 00:09:02.847 "nvme_admin": false, 00:09:02.847 "nvme_io": false, 00:09:02.847 "nvme_io_md": false, 00:09:02.847 "write_zeroes": true, 00:09:02.847 "zcopy": false, 00:09:02.847 "get_zone_info": false, 00:09:02.847 "zone_management": false, 00:09:02.847 "zone_append": false, 00:09:02.847 "compare": false, 00:09:02.848 "compare_and_write": false, 00:09:02.848 "abort": false, 00:09:02.848 "seek_hole": false, 00:09:02.848 "seek_data": false, 00:09:02.848 "copy": false, 00:09:02.848 "nvme_iov_md": false 00:09:02.848 }, 00:09:02.848 "memory_domains": [ 00:09:02.848 { 00:09:02.848 "dma_device_id": "system", 00:09:02.848 "dma_device_type": 1 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.848 "dma_device_type": 2 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "dma_device_id": "system", 00:09:02.848 "dma_device_type": 1 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.848 "dma_device_type": 2 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "dma_device_id": "system", 00:09:02.848 "dma_device_type": 1 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.848 "dma_device_type": 2 00:09:02.848 } 00:09:02.848 ], 00:09:02.848 "driver_specific": { 00:09:02.848 "raid": { 00:09:02.848 "uuid": "a22bdfc1-516a-409e-b36e-a4eff78e6cb4", 00:09:02.848 "strip_size_kb": 64, 00:09:02.848 "state": "online", 00:09:02.848 "raid_level": "concat", 00:09:02.848 "superblock": true, 00:09:02.848 "num_base_bdevs": 3, 00:09:02.848 "num_base_bdevs_discovered": 3, 00:09:02.848 "num_base_bdevs_operational": 3, 00:09:02.848 "base_bdevs_list": [ 00:09:02.848 { 00:09:02.848 
"name": "BaseBdev1", 00:09:02.848 "uuid": "fb9adb90-5bd9-4f30-afbc-e9260dc3f023", 00:09:02.848 "is_configured": true, 00:09:02.848 "data_offset": 2048, 00:09:02.848 "data_size": 63488 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "name": "BaseBdev2", 00:09:02.848 "uuid": "0c3b66d5-a744-4ead-8e5e-4acb9308b0a2", 00:09:02.848 "is_configured": true, 00:09:02.848 "data_offset": 2048, 00:09:02.848 "data_size": 63488 00:09:02.848 }, 00:09:02.848 { 00:09:02.848 "name": "BaseBdev3", 00:09:02.848 "uuid": "cde64ed5-3d1a-4e12-9828-1765b43f7777", 00:09:02.848 "is_configured": true, 00:09:02.848 "data_offset": 2048, 00:09:02.848 "data_size": 63488 00:09:02.848 } 00:09:02.848 ] 00:09:02.848 } 00:09:02.848 } 00:09:02.848 }' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:02.848 BaseBdev2 00:09:02.848 BaseBdev3' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.848 09:22:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.848 09:22:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.108 [2024-12-12 09:22:36.895329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.108 [2024-12-12 09:22:36.895355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.108 [2024-12-12 09:22:36.895407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.108 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.108 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.108 "name": "Existed_Raid", 00:09:03.108 "uuid": "a22bdfc1-516a-409e-b36e-a4eff78e6cb4", 00:09:03.108 "strip_size_kb": 64, 00:09:03.108 "state": "offline", 00:09:03.108 "raid_level": "concat", 00:09:03.108 "superblock": true, 00:09:03.108 "num_base_bdevs": 3, 00:09:03.108 "num_base_bdevs_discovered": 2, 00:09:03.108 "num_base_bdevs_operational": 2, 00:09:03.108 "base_bdevs_list": [ 00:09:03.108 { 00:09:03.108 "name": null, 00:09:03.108 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:03.108 "is_configured": false, 00:09:03.108 "data_offset": 0, 00:09:03.108 "data_size": 63488 00:09:03.108 }, 00:09:03.108 { 00:09:03.108 "name": "BaseBdev2", 00:09:03.108 "uuid": "0c3b66d5-a744-4ead-8e5e-4acb9308b0a2", 00:09:03.108 "is_configured": true, 00:09:03.108 "data_offset": 2048, 00:09:03.108 "data_size": 63488 00:09:03.108 }, 00:09:03.108 { 00:09:03.109 "name": "BaseBdev3", 00:09:03.109 "uuid": "cde64ed5-3d1a-4e12-9828-1765b43f7777", 00:09:03.109 "is_configured": true, 00:09:03.109 "data_offset": 2048, 00:09:03.109 "data_size": 63488 00:09:03.109 } 00:09:03.109 ] 00:09:03.109 }' 00:09:03.109 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.109 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.678 [2024-12-12 09:22:37.469522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.678 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.678 [2024-12-12 09:22:37.633722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.678 [2024-12-12 09:22:37.633876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 BaseBdev2 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 
09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 [ 00:09:03.939 { 00:09:03.939 "name": "BaseBdev2", 00:09:03.939 "aliases": [ 00:09:03.939 "8a4a8241-f0b8-4bab-bbce-c53a35514917" 00:09:03.939 ], 00:09:03.939 "product_name": "Malloc disk", 00:09:03.939 "block_size": 512, 00:09:03.939 "num_blocks": 65536, 00:09:03.939 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:03.939 "assigned_rate_limits": { 00:09:03.939 "rw_ios_per_sec": 0, 00:09:03.939 "rw_mbytes_per_sec": 0, 00:09:03.939 "r_mbytes_per_sec": 0, 00:09:03.939 "w_mbytes_per_sec": 0 
00:09:03.939 }, 00:09:03.939 "claimed": false, 00:09:03.939 "zoned": false, 00:09:03.939 "supported_io_types": { 00:09:03.939 "read": true, 00:09:03.939 "write": true, 00:09:03.939 "unmap": true, 00:09:03.939 "flush": true, 00:09:03.939 "reset": true, 00:09:03.939 "nvme_admin": false, 00:09:03.939 "nvme_io": false, 00:09:03.939 "nvme_io_md": false, 00:09:03.939 "write_zeroes": true, 00:09:03.939 "zcopy": true, 00:09:03.939 "get_zone_info": false, 00:09:03.939 "zone_management": false, 00:09:03.939 "zone_append": false, 00:09:03.939 "compare": false, 00:09:03.939 "compare_and_write": false, 00:09:03.939 "abort": true, 00:09:03.939 "seek_hole": false, 00:09:03.939 "seek_data": false, 00:09:03.939 "copy": true, 00:09:03.939 "nvme_iov_md": false 00:09:03.939 }, 00:09:03.939 "memory_domains": [ 00:09:03.939 { 00:09:03.939 "dma_device_id": "system", 00:09:03.939 "dma_device_type": 1 00:09:03.939 }, 00:09:03.939 { 00:09:03.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.939 "dma_device_type": 2 00:09:03.939 } 00:09:03.939 ], 00:09:03.939 "driver_specific": {} 00:09:03.939 } 00:09:03.939 ] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 BaseBdev3 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.939 [ 00:09:03.939 { 00:09:03.939 "name": "BaseBdev3", 00:09:03.939 "aliases": [ 00:09:03.939 "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0" 00:09:03.939 ], 00:09:03.939 "product_name": "Malloc disk", 00:09:03.939 "block_size": 512, 00:09:03.939 "num_blocks": 65536, 00:09:03.939 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:03.939 "assigned_rate_limits": { 00:09:03.939 "rw_ios_per_sec": 0, 00:09:03.939 "rw_mbytes_per_sec": 0, 
00:09:03.939 "r_mbytes_per_sec": 0, 00:09:03.939 "w_mbytes_per_sec": 0 00:09:03.939 }, 00:09:03.939 "claimed": false, 00:09:03.939 "zoned": false, 00:09:03.939 "supported_io_types": { 00:09:03.939 "read": true, 00:09:03.939 "write": true, 00:09:03.939 "unmap": true, 00:09:03.939 "flush": true, 00:09:03.939 "reset": true, 00:09:03.939 "nvme_admin": false, 00:09:03.939 "nvme_io": false, 00:09:03.939 "nvme_io_md": false, 00:09:03.939 "write_zeroes": true, 00:09:03.939 "zcopy": true, 00:09:03.939 "get_zone_info": false, 00:09:03.939 "zone_management": false, 00:09:03.939 "zone_append": false, 00:09:03.939 "compare": false, 00:09:03.939 "compare_and_write": false, 00:09:03.939 "abort": true, 00:09:03.939 "seek_hole": false, 00:09:03.939 "seek_data": false, 00:09:03.939 "copy": true, 00:09:03.939 "nvme_iov_md": false 00:09:03.939 }, 00:09:03.939 "memory_domains": [ 00:09:03.939 { 00:09:03.939 "dma_device_id": "system", 00:09:03.939 "dma_device_type": 1 00:09:03.939 }, 00:09:03.939 { 00:09:03.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.939 "dma_device_type": 2 00:09:03.939 } 00:09:03.939 ], 00:09:03.939 "driver_specific": {} 00:09:03.939 } 00:09:03.939 ] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.939 09:22:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.939 [2024-12-12 09:22:37.955474] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.939 [2024-12-12 09:22:37.955610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.940 [2024-12-12 09:22:37.955650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.940 [2024-12-12 09:22:37.957618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.940 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.940 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.940 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.199 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.199 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.199 "name": "Existed_Raid", 00:09:04.199 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:04.199 "strip_size_kb": 64, 00:09:04.199 "state": "configuring", 00:09:04.199 "raid_level": "concat", 00:09:04.199 "superblock": true, 00:09:04.199 "num_base_bdevs": 3, 00:09:04.199 "num_base_bdevs_discovered": 2, 00:09:04.199 "num_base_bdevs_operational": 3, 00:09:04.199 "base_bdevs_list": [ 00:09:04.199 { 00:09:04.199 "name": "BaseBdev1", 00:09:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.199 "is_configured": false, 00:09:04.199 "data_offset": 0, 00:09:04.199 "data_size": 0 00:09:04.199 }, 00:09:04.199 { 00:09:04.199 "name": "BaseBdev2", 00:09:04.199 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:04.199 "is_configured": true, 00:09:04.199 "data_offset": 2048, 00:09:04.199 "data_size": 63488 00:09:04.199 }, 00:09:04.199 { 00:09:04.199 "name": "BaseBdev3", 00:09:04.199 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:04.199 "is_configured": true, 00:09:04.199 "data_offset": 2048, 00:09:04.199 "data_size": 63488 00:09:04.199 } 00:09:04.199 ] 00:09:04.199 }' 00:09:04.199 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.199 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 [2024-12-12 09:22:38.398708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.459 "name": "Existed_Raid", 00:09:04.459 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:04.459 "strip_size_kb": 64, 00:09:04.459 "state": "configuring", 00:09:04.459 "raid_level": "concat", 00:09:04.459 "superblock": true, 00:09:04.459 "num_base_bdevs": 3, 00:09:04.459 "num_base_bdevs_discovered": 1, 00:09:04.459 "num_base_bdevs_operational": 3, 00:09:04.459 "base_bdevs_list": [ 00:09:04.459 { 00:09:04.459 "name": "BaseBdev1", 00:09:04.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.459 "is_configured": false, 00:09:04.459 "data_offset": 0, 00:09:04.459 "data_size": 0 00:09:04.459 }, 00:09:04.459 { 00:09:04.459 "name": null, 00:09:04.459 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:04.459 "is_configured": false, 00:09:04.459 "data_offset": 0, 00:09:04.459 "data_size": 63488 00:09:04.459 }, 00:09:04.459 { 00:09:04.459 "name": "BaseBdev3", 00:09:04.459 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:04.459 "is_configured": true, 00:09:04.459 "data_offset": 2048, 00:09:04.459 "data_size": 63488 00:09:04.459 } 00:09:04.459 ] 00:09:04.459 }' 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.459 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.028 
09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.028 [2024-12-12 09:22:38.927014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.028 BaseBdev1 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.028 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.028 [ 00:09:05.028 { 00:09:05.028 "name": "BaseBdev1", 00:09:05.028 "aliases": [ 00:09:05.028 "c13add26-3ea5-4fc7-a04b-875c501cdaf4" 00:09:05.028 ], 00:09:05.028 "product_name": "Malloc disk", 00:09:05.028 "block_size": 512, 00:09:05.028 "num_blocks": 65536, 00:09:05.028 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:05.028 "assigned_rate_limits": { 00:09:05.028 "rw_ios_per_sec": 0, 00:09:05.028 "rw_mbytes_per_sec": 0, 00:09:05.028 "r_mbytes_per_sec": 0, 00:09:05.028 "w_mbytes_per_sec": 0 00:09:05.028 }, 00:09:05.028 "claimed": true, 00:09:05.028 "claim_type": "exclusive_write", 00:09:05.028 "zoned": false, 00:09:05.028 "supported_io_types": { 00:09:05.028 "read": true, 00:09:05.028 "write": true, 00:09:05.028 "unmap": true, 00:09:05.028 "flush": true, 00:09:05.028 "reset": true, 00:09:05.028 "nvme_admin": false, 00:09:05.028 "nvme_io": false, 00:09:05.028 "nvme_io_md": false, 00:09:05.028 "write_zeroes": true, 00:09:05.028 "zcopy": true, 00:09:05.028 "get_zone_info": false, 00:09:05.029 "zone_management": false, 00:09:05.029 "zone_append": false, 00:09:05.029 "compare": false, 00:09:05.029 "compare_and_write": false, 00:09:05.029 "abort": true, 00:09:05.029 "seek_hole": false, 00:09:05.029 "seek_data": false, 00:09:05.029 "copy": true, 00:09:05.029 "nvme_iov_md": false 00:09:05.029 }, 00:09:05.029 "memory_domains": [ 00:09:05.029 { 00:09:05.029 "dma_device_id": "system", 00:09:05.029 "dma_device_type": 1 00:09:05.029 }, 00:09:05.029 { 00:09:05.029 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:05.029 "dma_device_type": 2 00:09:05.029 } 00:09:05.029 ], 00:09:05.029 "driver_specific": {} 00:09:05.029 } 00:09:05.029 ] 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.029 "name": "Existed_Raid", 00:09:05.029 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:05.029 "strip_size_kb": 64, 00:09:05.029 "state": "configuring", 00:09:05.029 "raid_level": "concat", 00:09:05.029 "superblock": true, 00:09:05.029 "num_base_bdevs": 3, 00:09:05.029 "num_base_bdevs_discovered": 2, 00:09:05.029 "num_base_bdevs_operational": 3, 00:09:05.029 "base_bdevs_list": [ 00:09:05.029 { 00:09:05.029 "name": "BaseBdev1", 00:09:05.029 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:05.029 "is_configured": true, 00:09:05.029 "data_offset": 2048, 00:09:05.029 "data_size": 63488 00:09:05.029 }, 00:09:05.029 { 00:09:05.029 "name": null, 00:09:05.029 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:05.029 "is_configured": false, 00:09:05.029 "data_offset": 0, 00:09:05.029 "data_size": 63488 00:09:05.029 }, 00:09:05.029 { 00:09:05.029 "name": "BaseBdev3", 00:09:05.029 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:05.029 "is_configured": true, 00:09:05.029 "data_offset": 2048, 00:09:05.029 "data_size": 63488 00:09:05.029 } 00:09:05.029 ] 00:09:05.029 }' 00:09:05.029 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.029 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.597 [2024-12-12 09:22:39.462090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.597 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.597 "name": "Existed_Raid", 00:09:05.597 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:05.597 "strip_size_kb": 64, 00:09:05.597 "state": "configuring", 00:09:05.597 "raid_level": "concat", 00:09:05.597 "superblock": true, 00:09:05.597 "num_base_bdevs": 3, 00:09:05.597 "num_base_bdevs_discovered": 1, 00:09:05.597 "num_base_bdevs_operational": 3, 00:09:05.597 "base_bdevs_list": [ 00:09:05.597 { 00:09:05.597 "name": "BaseBdev1", 00:09:05.597 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:05.597 "is_configured": true, 00:09:05.597 "data_offset": 2048, 00:09:05.597 "data_size": 63488 00:09:05.597 }, 00:09:05.597 { 00:09:05.597 "name": null, 00:09:05.597 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:05.597 "is_configured": false, 00:09:05.597 "data_offset": 0, 00:09:05.597 "data_size": 63488 00:09:05.597 }, 00:09:05.597 { 00:09:05.597 "name": null, 00:09:05.597 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:05.597 "is_configured": false, 00:09:05.597 "data_offset": 0, 00:09:05.598 "data_size": 63488 00:09:05.598 } 00:09:05.598 ] 00:09:05.598 }' 00:09:05.598 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.598 09:22:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.167 [2024-12-12 09:22:39.949304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.167 09:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.167 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.167 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.167 "name": "Existed_Raid", 00:09:06.167 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:06.167 "strip_size_kb": 64, 00:09:06.167 "state": "configuring", 00:09:06.167 "raid_level": "concat", 00:09:06.167 "superblock": true, 00:09:06.167 "num_base_bdevs": 3, 00:09:06.167 "num_base_bdevs_discovered": 2, 00:09:06.167 "num_base_bdevs_operational": 3, 00:09:06.167 "base_bdevs_list": [ 00:09:06.167 { 00:09:06.167 "name": "BaseBdev1", 00:09:06.167 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:06.167 "is_configured": true, 00:09:06.167 "data_offset": 2048, 00:09:06.167 "data_size": 63488 00:09:06.167 }, 00:09:06.167 { 00:09:06.167 "name": null, 00:09:06.167 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:06.167 "is_configured": 
false, 00:09:06.167 "data_offset": 0, 00:09:06.167 "data_size": 63488 00:09:06.167 }, 00:09:06.167 { 00:09:06.167 "name": "BaseBdev3", 00:09:06.167 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:06.167 "is_configured": true, 00:09:06.167 "data_offset": 2048, 00:09:06.167 "data_size": 63488 00:09:06.167 } 00:09:06.167 ] 00:09:06.167 }' 00:09:06.167 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.167 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.427 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.427 [2024-12-12 09:22:40.424581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.686 09:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.686 "name": "Existed_Raid", 00:09:06.686 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:06.686 "strip_size_kb": 64, 00:09:06.686 "state": "configuring", 00:09:06.686 "raid_level": "concat", 00:09:06.686 "superblock": true, 00:09:06.686 "num_base_bdevs": 3, 00:09:06.686 
"num_base_bdevs_discovered": 1, 00:09:06.686 "num_base_bdevs_operational": 3, 00:09:06.686 "base_bdevs_list": [ 00:09:06.686 { 00:09:06.686 "name": null, 00:09:06.686 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:06.686 "is_configured": false, 00:09:06.686 "data_offset": 0, 00:09:06.686 "data_size": 63488 00:09:06.686 }, 00:09:06.686 { 00:09:06.686 "name": null, 00:09:06.686 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:06.686 "is_configured": false, 00:09:06.686 "data_offset": 0, 00:09:06.686 "data_size": 63488 00:09:06.686 }, 00:09:06.686 { 00:09:06.686 "name": "BaseBdev3", 00:09:06.686 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:06.686 "is_configured": true, 00:09:06.686 "data_offset": 2048, 00:09:06.686 "data_size": 63488 00:09:06.686 } 00:09:06.686 ] 00:09:06.686 }' 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.686 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.946 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.946 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.946 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.946 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.205 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.205 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.205 09:22:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.205 [2024-12-12 09:22:41.006417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.205 
09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.205 "name": "Existed_Raid", 00:09:07.205 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:07.205 "strip_size_kb": 64, 00:09:07.205 "state": "configuring", 00:09:07.205 "raid_level": "concat", 00:09:07.205 "superblock": true, 00:09:07.205 "num_base_bdevs": 3, 00:09:07.205 "num_base_bdevs_discovered": 2, 00:09:07.205 "num_base_bdevs_operational": 3, 00:09:07.205 "base_bdevs_list": [ 00:09:07.205 { 00:09:07.205 "name": null, 00:09:07.205 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:07.205 "is_configured": false, 00:09:07.205 "data_offset": 0, 00:09:07.205 "data_size": 63488 00:09:07.205 }, 00:09:07.205 { 00:09:07.205 "name": "BaseBdev2", 00:09:07.205 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:07.205 "is_configured": true, 00:09:07.205 "data_offset": 2048, 00:09:07.205 "data_size": 63488 00:09:07.205 }, 00:09:07.205 { 00:09:07.205 "name": "BaseBdev3", 00:09:07.205 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:07.205 "is_configured": true, 00:09:07.205 "data_offset": 2048, 00:09:07.205 "data_size": 63488 00:09:07.205 } 00:09:07.205 ] 00:09:07.205 }' 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.205 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.465 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.465 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:07.465 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.465 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:07.465 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c13add26-3ea5-4fc7-a04b-875c501cdaf4 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.724 [2024-12-12 09:22:41.585573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:07.724 [2024-12-12 09:22:41.585867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.724 [2024-12-12 09:22:41.585890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.724 [2024-12-12 09:22:41.586177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:07.724 [2024-12-12 09:22:41.586333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.724 [2024-12-12 09:22:41.586343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:07.724 NewBaseBdev 00:09:07.724 [2024-12-12 09:22:41.586493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.724 [ 00:09:07.724 { 00:09:07.724 "name": "NewBaseBdev", 00:09:07.724 "aliases": [ 00:09:07.724 "c13add26-3ea5-4fc7-a04b-875c501cdaf4" 00:09:07.724 ], 00:09:07.724 "product_name": "Malloc disk", 00:09:07.724 "block_size": 512, 
00:09:07.724 "num_blocks": 65536, 00:09:07.724 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:07.724 "assigned_rate_limits": { 00:09:07.724 "rw_ios_per_sec": 0, 00:09:07.724 "rw_mbytes_per_sec": 0, 00:09:07.724 "r_mbytes_per_sec": 0, 00:09:07.724 "w_mbytes_per_sec": 0 00:09:07.724 }, 00:09:07.724 "claimed": true, 00:09:07.724 "claim_type": "exclusive_write", 00:09:07.724 "zoned": false, 00:09:07.724 "supported_io_types": { 00:09:07.724 "read": true, 00:09:07.724 "write": true, 00:09:07.724 "unmap": true, 00:09:07.724 "flush": true, 00:09:07.724 "reset": true, 00:09:07.724 "nvme_admin": false, 00:09:07.724 "nvme_io": false, 00:09:07.724 "nvme_io_md": false, 00:09:07.724 "write_zeroes": true, 00:09:07.724 "zcopy": true, 00:09:07.724 "get_zone_info": false, 00:09:07.724 "zone_management": false, 00:09:07.724 "zone_append": false, 00:09:07.724 "compare": false, 00:09:07.724 "compare_and_write": false, 00:09:07.724 "abort": true, 00:09:07.724 "seek_hole": false, 00:09:07.724 "seek_data": false, 00:09:07.724 "copy": true, 00:09:07.724 "nvme_iov_md": false 00:09:07.724 }, 00:09:07.724 "memory_domains": [ 00:09:07.724 { 00:09:07.724 "dma_device_id": "system", 00:09:07.724 "dma_device_type": 1 00:09:07.724 }, 00:09:07.724 { 00:09:07.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.724 "dma_device_type": 2 00:09:07.724 } 00:09:07.724 ], 00:09:07.724 "driver_specific": {} 00:09:07.724 } 00:09:07.724 ] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.724 "name": "Existed_Raid", 00:09:07.724 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:07.724 "strip_size_kb": 64, 00:09:07.724 "state": "online", 00:09:07.724 "raid_level": "concat", 00:09:07.724 "superblock": true, 00:09:07.724 "num_base_bdevs": 3, 00:09:07.724 "num_base_bdevs_discovered": 3, 00:09:07.724 "num_base_bdevs_operational": 3, 00:09:07.724 "base_bdevs_list": [ 00:09:07.724 { 00:09:07.724 "name": "NewBaseBdev", 00:09:07.724 "uuid": 
"c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:07.724 "is_configured": true, 00:09:07.724 "data_offset": 2048, 00:09:07.724 "data_size": 63488 00:09:07.724 }, 00:09:07.724 { 00:09:07.724 "name": "BaseBdev2", 00:09:07.724 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:07.724 "is_configured": true, 00:09:07.724 "data_offset": 2048, 00:09:07.724 "data_size": 63488 00:09:07.724 }, 00:09:07.724 { 00:09:07.724 "name": "BaseBdev3", 00:09:07.724 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:07.724 "is_configured": true, 00:09:07.724 "data_offset": 2048, 00:09:07.724 "data_size": 63488 00:09:07.724 } 00:09:07.724 ] 00:09:07.724 }' 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.724 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.305 [2024-12-12 09:22:42.101388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.305 "name": "Existed_Raid", 00:09:08.305 "aliases": [ 00:09:08.305 "ca477378-f6a2-455c-b4d0-399c5a0ca08f" 00:09:08.305 ], 00:09:08.305 "product_name": "Raid Volume", 00:09:08.305 "block_size": 512, 00:09:08.305 "num_blocks": 190464, 00:09:08.305 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:08.305 "assigned_rate_limits": { 00:09:08.305 "rw_ios_per_sec": 0, 00:09:08.305 "rw_mbytes_per_sec": 0, 00:09:08.305 "r_mbytes_per_sec": 0, 00:09:08.305 "w_mbytes_per_sec": 0 00:09:08.305 }, 00:09:08.305 "claimed": false, 00:09:08.305 "zoned": false, 00:09:08.305 "supported_io_types": { 00:09:08.305 "read": true, 00:09:08.305 "write": true, 00:09:08.305 "unmap": true, 00:09:08.305 "flush": true, 00:09:08.305 "reset": true, 00:09:08.305 "nvme_admin": false, 00:09:08.305 "nvme_io": false, 00:09:08.305 "nvme_io_md": false, 00:09:08.305 "write_zeroes": true, 00:09:08.305 "zcopy": false, 00:09:08.305 "get_zone_info": false, 00:09:08.305 "zone_management": false, 00:09:08.305 "zone_append": false, 00:09:08.305 "compare": false, 00:09:08.305 "compare_and_write": false, 00:09:08.305 "abort": false, 00:09:08.305 "seek_hole": false, 00:09:08.305 "seek_data": false, 00:09:08.305 "copy": false, 00:09:08.305 "nvme_iov_md": false 00:09:08.305 }, 00:09:08.305 "memory_domains": [ 00:09:08.305 { 00:09:08.305 "dma_device_id": "system", 00:09:08.305 "dma_device_type": 1 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.305 "dma_device_type": 2 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 "dma_device_id": "system", 00:09:08.305 "dma_device_type": 1 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.305 "dma_device_type": 2 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 "dma_device_id": "system", 00:09:08.305 "dma_device_type": 1 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.305 "dma_device_type": 2 00:09:08.305 } 00:09:08.305 ], 00:09:08.305 "driver_specific": { 00:09:08.305 "raid": { 00:09:08.305 "uuid": "ca477378-f6a2-455c-b4d0-399c5a0ca08f", 00:09:08.305 "strip_size_kb": 64, 00:09:08.305 "state": "online", 00:09:08.305 "raid_level": "concat", 00:09:08.305 "superblock": true, 00:09:08.305 "num_base_bdevs": 3, 00:09:08.305 "num_base_bdevs_discovered": 3, 00:09:08.305 "num_base_bdevs_operational": 3, 00:09:08.305 "base_bdevs_list": [ 00:09:08.305 { 00:09:08.305 "name": "NewBaseBdev", 00:09:08.305 "uuid": "c13add26-3ea5-4fc7-a04b-875c501cdaf4", 00:09:08.305 "is_configured": true, 00:09:08.305 "data_offset": 2048, 00:09:08.305 "data_size": 63488 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 "name": "BaseBdev2", 00:09:08.305 "uuid": "8a4a8241-f0b8-4bab-bbce-c53a35514917", 00:09:08.305 "is_configured": true, 00:09:08.305 "data_offset": 2048, 00:09:08.305 "data_size": 63488 00:09:08.305 }, 00:09:08.305 { 00:09:08.305 "name": "BaseBdev3", 00:09:08.305 "uuid": "c3b7cf5d-4a03-4779-8b6c-4ae85e5b9fa0", 00:09:08.305 "is_configured": true, 00:09:08.305 "data_offset": 2048, 00:09:08.305 "data_size": 63488 00:09:08.305 } 00:09:08.305 ] 00:09:08.305 } 00:09:08.305 } 00:09:08.305 }' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:08.305 BaseBdev2 00:09:08.305 BaseBdev3' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.305 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.565 [2024-12-12 09:22:42.384816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.565 [2024-12-12 09:22:42.384852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.565 [2024-12-12 09:22:42.384945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.565 [2024-12-12 09:22:42.385019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.565 [2024-12-12 09:22:42.385033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67376 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67376 ']' 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67376 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67376 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67376' 00:09:08.565 killing process with pid 67376 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67376 00:09:08.565 [2024-12-12 09:22:42.433730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.565 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67376 00:09:08.824 [2024-12-12 09:22:42.757691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.204 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:10.204 00:09:10.204 real 0m10.660s 00:09:10.204 user 0m16.707s 00:09:10.204 sys 0m1.968s 00:09:10.204 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:10.204 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.204 ************************************ 00:09:10.204 END TEST raid_state_function_test_sb 00:09:10.204 ************************************ 00:09:10.204 09:22:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:10.205 09:22:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:10.205 09:22:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.205 09:22:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.205 ************************************ 00:09:10.205 START TEST raid_superblock_test 00:09:10.205 ************************************ 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:10.205 09:22:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67993 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67993 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67993 ']' 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.205 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.205 [2024-12-12 09:22:44.129338] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:09:10.205 [2024-12-12 09:22:44.129509] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67993 ] 00:09:10.465 [2024-12-12 09:22:44.301015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.465 [2024-12-12 09:22:44.439032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.724 [2024-12-12 09:22:44.664834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.724 [2024-12-12 09:22:44.665003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:10.984 
09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 malloc1 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.984 09:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 [2024-12-12 09:22:45.004695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.984 [2024-12-12 09:22:45.004772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.985 [2024-12-12 09:22:45.004797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.985 [2024-12-12 09:22:45.004806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.985 [2024-12-12 09:22:45.007123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.244 [2024-12-12 09:22:45.007241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.244 pt1 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.244 malloc2 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.244 [2024-12-12 09:22:45.064121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.244 [2024-12-12 09:22:45.064275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.244 [2024-12-12 09:22:45.064320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:11.244 [2024-12-12 09:22:45.064354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.244 [2024-12-12 09:22:45.066778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.244 [2024-12-12 09:22:45.066848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.244 
pt2 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.244 malloc3 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.244 [2024-12-12 09:22:45.140090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:11.244 [2024-12-12 09:22:45.140206] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.244 [2024-12-12 09:22:45.140247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:11.244 [2024-12-12 09:22:45.140275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.244 [2024-12-12 09:22:45.142586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.244 [2024-12-12 09:22:45.142658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:11.244 pt3 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.244 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.244 [2024-12-12 09:22:45.152123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:11.244 [2024-12-12 09:22:45.154125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.244 [2024-12-12 09:22:45.154189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:11.244 [2024-12-12 09:22:45.154339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:11.244 [2024-12-12 09:22:45.154352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.244 [2024-12-12 09:22:45.154587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:11.244 [2024-12-12 09:22:45.154732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:11.244 [2024-12-12 09:22:45.154741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:11.244 [2024-12-12 09:22:45.154877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.245 09:22:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.245 "name": "raid_bdev1", 00:09:11.245 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:11.245 "strip_size_kb": 64, 00:09:11.245 "state": "online", 00:09:11.245 "raid_level": "concat", 00:09:11.245 "superblock": true, 00:09:11.245 "num_base_bdevs": 3, 00:09:11.245 "num_base_bdevs_discovered": 3, 00:09:11.245 "num_base_bdevs_operational": 3, 00:09:11.245 "base_bdevs_list": [ 00:09:11.245 { 00:09:11.245 "name": "pt1", 00:09:11.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.245 "is_configured": true, 00:09:11.245 "data_offset": 2048, 00:09:11.245 "data_size": 63488 00:09:11.245 }, 00:09:11.245 { 00:09:11.245 "name": "pt2", 00:09:11.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.245 "is_configured": true, 00:09:11.245 "data_offset": 2048, 00:09:11.245 "data_size": 63488 00:09:11.245 }, 00:09:11.245 { 00:09:11.245 "name": "pt3", 00:09:11.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.245 "is_configured": true, 00:09:11.245 "data_offset": 2048, 00:09:11.245 "data_size": 63488 00:09:11.245 } 00:09:11.245 ] 00:09:11.245 }' 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.245 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.815 [2024-12-12 09:22:45.635664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.815 "name": "raid_bdev1", 00:09:11.815 "aliases": [ 00:09:11.815 "17f2c133-8102-42eb-91f6-39eb18595616" 00:09:11.815 ], 00:09:11.815 "product_name": "Raid Volume", 00:09:11.815 "block_size": 512, 00:09:11.815 "num_blocks": 190464, 00:09:11.815 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:11.815 "assigned_rate_limits": { 00:09:11.815 "rw_ios_per_sec": 0, 00:09:11.815 "rw_mbytes_per_sec": 0, 00:09:11.815 "r_mbytes_per_sec": 0, 00:09:11.815 "w_mbytes_per_sec": 0 00:09:11.815 }, 00:09:11.815 "claimed": false, 00:09:11.815 "zoned": false, 00:09:11.815 "supported_io_types": { 00:09:11.815 "read": true, 00:09:11.815 "write": true, 00:09:11.815 "unmap": true, 00:09:11.815 "flush": true, 00:09:11.815 "reset": true, 00:09:11.815 "nvme_admin": false, 00:09:11.815 "nvme_io": false, 00:09:11.815 "nvme_io_md": false, 00:09:11.815 "write_zeroes": true, 00:09:11.815 "zcopy": false, 00:09:11.815 "get_zone_info": false, 00:09:11.815 "zone_management": false, 00:09:11.815 "zone_append": false, 00:09:11.815 "compare": 
false, 00:09:11.815 "compare_and_write": false, 00:09:11.815 "abort": false, 00:09:11.815 "seek_hole": false, 00:09:11.815 "seek_data": false, 00:09:11.815 "copy": false, 00:09:11.815 "nvme_iov_md": false 00:09:11.815 }, 00:09:11.815 "memory_domains": [ 00:09:11.815 { 00:09:11.815 "dma_device_id": "system", 00:09:11.815 "dma_device_type": 1 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.815 "dma_device_type": 2 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "dma_device_id": "system", 00:09:11.815 "dma_device_type": 1 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.815 "dma_device_type": 2 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "dma_device_id": "system", 00:09:11.815 "dma_device_type": 1 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.815 "dma_device_type": 2 00:09:11.815 } 00:09:11.815 ], 00:09:11.815 "driver_specific": { 00:09:11.815 "raid": { 00:09:11.815 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:11.815 "strip_size_kb": 64, 00:09:11.815 "state": "online", 00:09:11.815 "raid_level": "concat", 00:09:11.815 "superblock": true, 00:09:11.815 "num_base_bdevs": 3, 00:09:11.815 "num_base_bdevs_discovered": 3, 00:09:11.815 "num_base_bdevs_operational": 3, 00:09:11.815 "base_bdevs_list": [ 00:09:11.815 { 00:09:11.815 "name": "pt1", 00:09:11.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.815 "is_configured": true, 00:09:11.815 "data_offset": 2048, 00:09:11.815 "data_size": 63488 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "name": "pt2", 00:09:11.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.815 "is_configured": true, 00:09:11.815 "data_offset": 2048, 00:09:11.815 "data_size": 63488 00:09:11.815 }, 00:09:11.815 { 00:09:11.815 "name": "pt3", 00:09:11.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.815 "is_configured": true, 00:09:11.815 "data_offset": 2048, 00:09:11.815 
"data_size": 63488 00:09:11.815 } 00:09:11.815 ] 00:09:11.815 } 00:09:11.815 } 00:09:11.815 }' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.815 pt2 00:09:11.815 pt3' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:11.815 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 [2024-12-12 09:22:45.943121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=17f2c133-8102-42eb-91f6-39eb18595616 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 17f2c133-8102-42eb-91f6-39eb18595616 ']' 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 [2024-12-12 09:22:45.986704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.076 [2024-12-12 09:22:45.986779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.076 [2024-12-12 09:22:45.986892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.076 [2024-12-12 09:22:45.987007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.076 [2024-12-12 09:22:45.987049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 09:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.338 [2024-12-12 09:22:46.134510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:12.338 [2024-12-12 09:22:46.136677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:12.338 
[2024-12-12 09:22:46.136735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:12.338 [2024-12-12 09:22:46.136789] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:12.338 [2024-12-12 09:22:46.136843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:12.338 [2024-12-12 09:22:46.136861] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:12.338 [2024-12-12 09:22:46.136878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.338 [2024-12-12 09:22:46.136888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:12.338 request: 00:09:12.338 { 00:09:12.338 "name": "raid_bdev1", 00:09:12.338 "raid_level": "concat", 00:09:12.338 "base_bdevs": [ 00:09:12.338 "malloc1", 00:09:12.338 "malloc2", 00:09:12.338 "malloc3" 00:09:12.338 ], 00:09:12.338 "strip_size_kb": 64, 00:09:12.338 "superblock": false, 00:09:12.338 "method": "bdev_raid_create", 00:09:12.338 "req_id": 1 00:09:12.338 } 00:09:12.338 Got JSON-RPC error response 00:09:12.338 response: 00:09:12.338 { 00:09:12.338 "code": -17, 00:09:12.338 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:12.338 } 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.338 09:22:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.338 [2024-12-12 09:22:46.202339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:12.338 [2024-12-12 09:22:46.202469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.338 [2024-12-12 09:22:46.202515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:12.338 [2024-12-12 09:22:46.202547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.338 [2024-12-12 09:22:46.205168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.338 [2024-12-12 09:22:46.205257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:12.338 [2024-12-12 09:22:46.205379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:12.338 [2024-12-12 09:22:46.205492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:12.338 pt1 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.338 "name": "raid_bdev1", 00:09:12.338 "uuid": 
"17f2c133-8102-42eb-91f6-39eb18595616", 00:09:12.338 "strip_size_kb": 64, 00:09:12.338 "state": "configuring", 00:09:12.338 "raid_level": "concat", 00:09:12.338 "superblock": true, 00:09:12.338 "num_base_bdevs": 3, 00:09:12.338 "num_base_bdevs_discovered": 1, 00:09:12.338 "num_base_bdevs_operational": 3, 00:09:12.338 "base_bdevs_list": [ 00:09:12.338 { 00:09:12.338 "name": "pt1", 00:09:12.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.338 "is_configured": true, 00:09:12.338 "data_offset": 2048, 00:09:12.338 "data_size": 63488 00:09:12.338 }, 00:09:12.338 { 00:09:12.338 "name": null, 00:09:12.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.338 "is_configured": false, 00:09:12.338 "data_offset": 2048, 00:09:12.338 "data_size": 63488 00:09:12.338 }, 00:09:12.338 { 00:09:12.338 "name": null, 00:09:12.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.338 "is_configured": false, 00:09:12.338 "data_offset": 2048, 00:09:12.338 "data_size": 63488 00:09:12.338 } 00:09:12.338 ] 00:09:12.338 }' 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.338 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.907 [2024-12-12 09:22:46.661617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.907 [2024-12-12 09:22:46.661715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.907 [2024-12-12 09:22:46.661743] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:12.907 [2024-12-12 09:22:46.661752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.907 [2024-12-12 09:22:46.662266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.907 [2024-12-12 09:22:46.662297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.907 [2024-12-12 09:22:46.662400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.907 [2024-12-12 09:22:46.662436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.907 pt2 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.907 [2024-12-12 09:22:46.669592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.907 "name": "raid_bdev1", 00:09:12.907 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:12.907 "strip_size_kb": 64, 00:09:12.907 "state": "configuring", 00:09:12.907 "raid_level": "concat", 00:09:12.907 "superblock": true, 00:09:12.907 "num_base_bdevs": 3, 00:09:12.907 "num_base_bdevs_discovered": 1, 00:09:12.907 "num_base_bdevs_operational": 3, 00:09:12.907 "base_bdevs_list": [ 00:09:12.907 { 00:09:12.907 "name": "pt1", 00:09:12.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.907 "is_configured": true, 00:09:12.907 "data_offset": 2048, 00:09:12.907 "data_size": 63488 00:09:12.907 }, 00:09:12.907 { 00:09:12.907 "name": null, 00:09:12.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.907 "is_configured": false, 00:09:12.907 "data_offset": 0, 00:09:12.907 "data_size": 63488 00:09:12.907 }, 00:09:12.907 { 00:09:12.907 "name": null, 00:09:12.907 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:12.907 "is_configured": false, 00:09:12.907 "data_offset": 2048, 00:09:12.907 "data_size": 63488 00:09:12.907 } 00:09:12.907 ] 00:09:12.907 }' 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.907 09:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.167 [2024-12-12 09:22:47.132820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:13.167 [2024-12-12 09:22:47.132997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.167 [2024-12-12 09:22:47.133035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:13.167 [2024-12-12 09:22:47.133064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.167 [2024-12-12 09:22:47.133613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.167 [2024-12-12 09:22:47.133684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:13.167 [2024-12-12 09:22:47.133813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:13.167 [2024-12-12 09:22:47.133867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.167 pt2 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.167 [2024-12-12 09:22:47.144758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:13.167 [2024-12-12 09:22:47.144845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.167 [2024-12-12 09:22:47.144874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:13.167 [2024-12-12 09:22:47.144901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.167 [2024-12-12 09:22:47.145308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.167 [2024-12-12 09:22:47.145369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:13.167 [2024-12-12 09:22:47.145452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:13.167 [2024-12-12 09:22:47.145507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:13.167 [2024-12-12 09:22:47.145648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.168 [2024-12-12 09:22:47.145686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.168 [2024-12-12 09:22:47.145950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:13.168 [2024-12-12 
09:22:47.146144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.168 [2024-12-12 09:22:47.146181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:13.168 [2024-12-12 09:22:47.146355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.168 pt3 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.168 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.427 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.427 "name": "raid_bdev1", 00:09:13.427 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:13.427 "strip_size_kb": 64, 00:09:13.427 "state": "online", 00:09:13.427 "raid_level": "concat", 00:09:13.427 "superblock": true, 00:09:13.427 "num_base_bdevs": 3, 00:09:13.427 "num_base_bdevs_discovered": 3, 00:09:13.427 "num_base_bdevs_operational": 3, 00:09:13.427 "base_bdevs_list": [ 00:09:13.427 { 00:09:13.427 "name": "pt1", 00:09:13.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.427 "is_configured": true, 00:09:13.427 "data_offset": 2048, 00:09:13.427 "data_size": 63488 00:09:13.427 }, 00:09:13.427 { 00:09:13.427 "name": "pt2", 00:09:13.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.427 "is_configured": true, 00:09:13.427 "data_offset": 2048, 00:09:13.427 "data_size": 63488 00:09:13.427 }, 00:09:13.427 { 00:09:13.427 "name": "pt3", 00:09:13.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.427 "is_configured": true, 00:09:13.427 "data_offset": 2048, 00:09:13.427 "data_size": 63488 00:09:13.427 } 00:09:13.427 ] 00:09:13.427 }' 00:09:13.427 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.427 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.687 [2024-12-12 09:22:47.588381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.687 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.687 "name": "raid_bdev1", 00:09:13.687 "aliases": [ 00:09:13.687 "17f2c133-8102-42eb-91f6-39eb18595616" 00:09:13.687 ], 00:09:13.687 "product_name": "Raid Volume", 00:09:13.687 "block_size": 512, 00:09:13.687 "num_blocks": 190464, 00:09:13.687 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:13.687 "assigned_rate_limits": { 00:09:13.687 "rw_ios_per_sec": 0, 00:09:13.687 "rw_mbytes_per_sec": 0, 00:09:13.687 "r_mbytes_per_sec": 0, 00:09:13.687 "w_mbytes_per_sec": 0 00:09:13.687 }, 00:09:13.687 "claimed": false, 00:09:13.687 "zoned": false, 00:09:13.687 "supported_io_types": { 00:09:13.687 "read": true, 00:09:13.687 "write": true, 00:09:13.687 "unmap": true, 00:09:13.687 "flush": true, 00:09:13.687 "reset": true, 00:09:13.687 "nvme_admin": false, 00:09:13.687 "nvme_io": false, 00:09:13.687 "nvme_io_md": false, 
00:09:13.687 "write_zeroes": true, 00:09:13.687 "zcopy": false, 00:09:13.687 "get_zone_info": false, 00:09:13.687 "zone_management": false, 00:09:13.687 "zone_append": false, 00:09:13.687 "compare": false, 00:09:13.688 "compare_and_write": false, 00:09:13.688 "abort": false, 00:09:13.688 "seek_hole": false, 00:09:13.688 "seek_data": false, 00:09:13.688 "copy": false, 00:09:13.688 "nvme_iov_md": false 00:09:13.688 }, 00:09:13.688 "memory_domains": [ 00:09:13.688 { 00:09:13.688 "dma_device_id": "system", 00:09:13.688 "dma_device_type": 1 00:09:13.688 }, 00:09:13.688 { 00:09:13.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.688 "dma_device_type": 2 00:09:13.688 }, 00:09:13.688 { 00:09:13.688 "dma_device_id": "system", 00:09:13.688 "dma_device_type": 1 00:09:13.688 }, 00:09:13.688 { 00:09:13.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.688 "dma_device_type": 2 00:09:13.688 }, 00:09:13.688 { 00:09:13.688 "dma_device_id": "system", 00:09:13.688 "dma_device_type": 1 00:09:13.688 }, 00:09:13.688 { 00:09:13.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.688 "dma_device_type": 2 00:09:13.688 } 00:09:13.688 ], 00:09:13.688 "driver_specific": { 00:09:13.688 "raid": { 00:09:13.688 "uuid": "17f2c133-8102-42eb-91f6-39eb18595616", 00:09:13.688 "strip_size_kb": 64, 00:09:13.688 "state": "online", 00:09:13.688 "raid_level": "concat", 00:09:13.688 "superblock": true, 00:09:13.688 "num_base_bdevs": 3, 00:09:13.688 "num_base_bdevs_discovered": 3, 00:09:13.688 "num_base_bdevs_operational": 3, 00:09:13.688 "base_bdevs_list": [ 00:09:13.688 { 00:09:13.688 "name": "pt1", 00:09:13.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.688 "is_configured": true, 00:09:13.688 "data_offset": 2048, 00:09:13.688 "data_size": 63488 00:09:13.688 }, 00:09:13.688 { 00:09:13.688 "name": "pt2", 00:09:13.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.688 "is_configured": true, 00:09:13.688 "data_offset": 2048, 00:09:13.688 "data_size": 63488 00:09:13.688 }, 
00:09:13.688 { 00:09:13.688 "name": "pt3", 00:09:13.688 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.688 "is_configured": true, 00:09:13.688 "data_offset": 2048, 00:09:13.688 "data_size": 63488 00:09:13.688 } 00:09:13.688 ] 00:09:13.688 } 00:09:13.688 } 00:09:13.688 }' 00:09:13.688 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.688 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.688 pt2 00:09:13.688 pt3' 00:09:13.688 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.688 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.688 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:13.948 09:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.948 
[2024-12-12 09:22:47.851876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 17f2c133-8102-42eb-91f6-39eb18595616 '!=' 17f2c133-8102-42eb-91f6-39eb18595616 ']' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67993 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67993 ']' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67993 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67993 00:09:13.948 killing process with pid 67993 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67993' 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67993 00:09:13.948 [2024-12-12 09:22:47.938407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.948 [2024-12-12 09:22:47.938497] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.948 [2024-12-12 09:22:47.938562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.948 [2024-12-12 09:22:47.938574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:13.948 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67993 00:09:14.518 [2024-12-12 09:22:48.269591] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.457 ************************************ 00:09:15.457 END TEST raid_superblock_test 00:09:15.457 ************************************ 00:09:15.457 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:15.457 00:09:15.457 real 0m5.408s 00:09:15.457 user 0m7.637s 00:09:15.457 sys 0m1.004s 00:09:15.457 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.457 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.717 09:22:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:15.717 09:22:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:15.717 09:22:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.717 09:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.717 ************************************ 00:09:15.717 START TEST raid_read_error_test 00:09:15.717 ************************************ 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:15.717 09:22:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BGl7JymLdR 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68256 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68256 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68256 ']' 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.717 09:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.717 [2024-12-12 09:22:49.617911] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:09:15.717 [2024-12-12 09:22:49.618033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68256 ] 00:09:15.976 [2024-12-12 09:22:49.792334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.976 [2024-12-12 09:22:49.914747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.234 [2024-12-12 09:22:50.133767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.234 [2024-12-12 09:22:50.133816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 BaseBdev1_malloc 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 true 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 [2024-12-12 09:22:50.506842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:16.493 [2024-12-12 09:22:50.506903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.493 [2024-12-12 09:22:50.506923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:16.493 [2024-12-12 09:22:50.506934] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.493 [2024-12-12 09:22:50.509194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.493 [2024-12-12 09:22:50.509229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:16.493 BaseBdev1 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 BaseBdev2_malloc 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 true 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 [2024-12-12 09:22:50.578339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:16.753 [2024-12-12 09:22:50.578392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.753 [2024-12-12 09:22:50.578409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:16.753 [2024-12-12 09:22:50.578420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.753 [2024-12-12 09:22:50.580777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.753 [2024-12-12 09:22:50.580812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:16.753 BaseBdev2 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 BaseBdev3_malloc 00:09:16.753 09:22:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 true 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 [2024-12-12 09:22:50.682343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:16.753 [2024-12-12 09:22:50.682395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.753 [2024-12-12 09:22:50.682412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:16.753 [2024-12-12 09:22:50.682423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.753 [2024-12-12 09:22:50.684696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.753 [2024-12-12 09:22:50.684729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:16.753 BaseBdev3 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 [2024-12-12 09:22:50.694406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.753 [2024-12-12 09:22:50.696376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.753 [2024-12-12 09:22:50.696450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.753 [2024-12-12 09:22:50.696662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:16.753 [2024-12-12 09:22:50.696680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:16.753 [2024-12-12 09:22:50.696903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:16.753 [2024-12-12 09:22:50.697078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:16.753 [2024-12-12 09:22:50.697097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:16.753 [2024-12-12 09:22:50.697230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.753 09:22:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.753 "name": "raid_bdev1", 00:09:16.753 "uuid": "2d3ea5fb-2d93-4b20-a6d8-3a750ddb8eaf", 00:09:16.753 "strip_size_kb": 64, 00:09:16.753 "state": "online", 00:09:16.753 "raid_level": "concat", 00:09:16.753 "superblock": true, 00:09:16.753 "num_base_bdevs": 3, 00:09:16.753 "num_base_bdevs_discovered": 3, 00:09:16.753 "num_base_bdevs_operational": 3, 00:09:16.753 "base_bdevs_list": [ 00:09:16.753 { 00:09:16.753 "name": "BaseBdev1", 00:09:16.753 "uuid": "43a6615a-8b41-5b8a-bc38-7173a3d315c0", 00:09:16.753 "is_configured": true, 00:09:16.753 "data_offset": 2048, 00:09:16.753 "data_size": 63488 00:09:16.753 }, 00:09:16.753 { 00:09:16.753 "name": "BaseBdev2", 00:09:16.753 "uuid": "72893fc9-4932-562e-a6bc-b1b9a3628b33", 00:09:16.753 "is_configured": true, 00:09:16.753 "data_offset": 2048, 00:09:16.753 "data_size": 63488 
00:09:16.753 }, 00:09:16.753 { 00:09:16.753 "name": "BaseBdev3", 00:09:16.753 "uuid": "ad6ad34a-9b8b-507b-aadd-a6727df98421", 00:09:16.753 "is_configured": true, 00:09:16.753 "data_offset": 2048, 00:09:16.753 "data_size": 63488 00:09:16.753 } 00:09:16.753 ] 00:09:16.753 }' 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.753 09:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.323 09:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:17.323 09:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.323 [2024-12-12 09:22:51.198668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.261 "name": "raid_bdev1", 00:09:18.261 "uuid": "2d3ea5fb-2d93-4b20-a6d8-3a750ddb8eaf", 00:09:18.261 "strip_size_kb": 64, 00:09:18.261 "state": "online", 00:09:18.261 "raid_level": "concat", 00:09:18.261 "superblock": true, 00:09:18.261 "num_base_bdevs": 3, 00:09:18.261 "num_base_bdevs_discovered": 3, 00:09:18.261 "num_base_bdevs_operational": 3, 00:09:18.261 "base_bdevs_list": [ 00:09:18.261 { 00:09:18.261 "name": "BaseBdev1", 00:09:18.261 "uuid": "43a6615a-8b41-5b8a-bc38-7173a3d315c0", 00:09:18.261 "is_configured": true, 00:09:18.261 "data_offset": 2048, 00:09:18.261 "data_size": 63488 
00:09:18.261 }, 00:09:18.261 { 00:09:18.261 "name": "BaseBdev2", 00:09:18.261 "uuid": "72893fc9-4932-562e-a6bc-b1b9a3628b33", 00:09:18.261 "is_configured": true, 00:09:18.261 "data_offset": 2048, 00:09:18.261 "data_size": 63488 00:09:18.261 }, 00:09:18.261 { 00:09:18.261 "name": "BaseBdev3", 00:09:18.261 "uuid": "ad6ad34a-9b8b-507b-aadd-a6727df98421", 00:09:18.261 "is_configured": true, 00:09:18.261 "data_offset": 2048, 00:09:18.261 "data_size": 63488 00:09:18.261 } 00:09:18.261 ] 00:09:18.261 }' 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.261 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.521 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.521 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.521 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.521 [2024-12-12 09:22:52.539668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.521 [2024-12-12 09:22:52.539724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.521 [2024-12-12 09:22:52.542263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.521 [2024-12-12 09:22:52.542317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.521 [2024-12-12 09:22:52.542358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.521 [2024-12-12 09:22:52.542371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:18.780 { 00:09:18.780 "results": [ 00:09:18.780 { 00:09:18.780 "job": "raid_bdev1", 00:09:18.780 "core_mask": "0x1", 00:09:18.780 "workload": "randrw", 00:09:18.780 "percentage": 50, 
00:09:18.780 "status": "finished", 00:09:18.780 "queue_depth": 1, 00:09:18.780 "io_size": 131072, 00:09:18.780 "runtime": 1.341798, 00:09:18.780 "iops": 14002.107619775854, 00:09:18.780 "mibps": 1750.2634524719817, 00:09:18.780 "io_failed": 1, 00:09:18.780 "io_timeout": 0, 00:09:18.780 "avg_latency_us": 100.44185938952945, 00:09:18.780 "min_latency_us": 25.152838427947597, 00:09:18.780 "max_latency_us": 1287.825327510917 00:09:18.780 } 00:09:18.780 ], 00:09:18.780 "core_count": 1 00:09:18.780 } 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68256 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68256 ']' 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68256 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68256 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.780 killing process with pid 68256 00:09:18.780 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68256' 00:09:18.781 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68256 00:09:18.781 [2024-12-12 09:22:52.589376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:18.781 09:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68256 00:09:19.040 [2024-12-12 
09:22:52.831601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BGl7JymLdR 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:20.426 00:09:20.426 real 0m4.575s 00:09:20.426 user 0m5.259s 00:09:20.426 sys 0m0.655s 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.426 09:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.426 ************************************ 00:09:20.426 END TEST raid_read_error_test 00:09:20.426 ************************************ 00:09:20.426 09:22:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:20.426 09:22:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:20.426 09:22:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.426 09:22:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.426 ************************************ 00:09:20.426 START TEST raid_write_error_test 00:09:20.426 ************************************ 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:20.426 09:22:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:20.426 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:20.427 09:22:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.69qvHq885w 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68398 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68398 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68398 ']' 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.427 09:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.427 [2024-12-12 09:22:54.263353] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:09:20.427 [2024-12-12 09:22:54.263458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68398 ] 00:09:20.427 [2024-12-12 09:22:54.435279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.692 [2024-12-12 09:22:54.563459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.951 [2024-12-12 09:22:54.796644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.951 [2024-12-12 09:22:54.796709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.211 BaseBdev1_malloc 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.211 true 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.211 [2024-12-12 09:22:55.156225] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:21.211 [2024-12-12 09:22:55.156287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.211 [2024-12-12 09:22:55.156309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:21.211 [2024-12-12 09:22:55.156320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.211 [2024-12-12 09:22:55.158646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.211 [2024-12-12 09:22:55.158684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:21.211 BaseBdev1 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.211 BaseBdev2_malloc 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.211 true 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.211 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.211 [2024-12-12 09:22:55.230411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:21.211 [2024-12-12 09:22:55.230466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.211 [2024-12-12 09:22:55.230481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:21.211 [2024-12-12 09:22:55.230491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.211 [2024-12-12 09:22:55.232790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.211 [2024-12-12 09:22:55.232827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:21.470 BaseBdev2 00:09:21.470 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.470 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.470 09:22:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:21.470 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.470 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.470 BaseBdev3_malloc 00:09:21.470 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.470 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.471 true 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.471 [2024-12-12 09:22:55.331376] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:21.471 [2024-12-12 09:22:55.331425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.471 [2024-12-12 09:22:55.331442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:21.471 [2024-12-12 09:22:55.331453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.471 [2024-12-12 09:22:55.333689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.471 [2024-12-12 09:22:55.333725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:21.471 BaseBdev3 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.471 [2024-12-12 09:22:55.343441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.471 [2024-12-12 09:22:55.345400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.471 [2024-12-12 09:22:55.345472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.471 [2024-12-12 09:22:55.345675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:21.471 [2024-12-12 09:22:55.345688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.471 [2024-12-12 09:22:55.345914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:21.471 [2024-12-12 09:22:55.346087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:21.471 [2024-12-12 09:22:55.346104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:21.471 [2024-12-12 09:22:55.346227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.471 "name": "raid_bdev1", 00:09:21.471 "uuid": "79297f09-281e-472e-9961-e711b58e5c55", 00:09:21.471 "strip_size_kb": 64, 00:09:21.471 "state": "online", 00:09:21.471 "raid_level": "concat", 00:09:21.471 "superblock": true, 00:09:21.471 "num_base_bdevs": 3, 00:09:21.471 "num_base_bdevs_discovered": 3, 00:09:21.471 "num_base_bdevs_operational": 3, 00:09:21.471 "base_bdevs_list": [ 00:09:21.471 { 00:09:21.471 
"name": "BaseBdev1", 00:09:21.471 "uuid": "6382b1b5-883f-5a56-a94b-a9a030fe60a7", 00:09:21.471 "is_configured": true, 00:09:21.471 "data_offset": 2048, 00:09:21.471 "data_size": 63488 00:09:21.471 }, 00:09:21.471 { 00:09:21.471 "name": "BaseBdev2", 00:09:21.471 "uuid": "3b13d9f0-c730-523b-b376-830a9de30480", 00:09:21.471 "is_configured": true, 00:09:21.471 "data_offset": 2048, 00:09:21.471 "data_size": 63488 00:09:21.471 }, 00:09:21.471 { 00:09:21.471 "name": "BaseBdev3", 00:09:21.471 "uuid": "ce48b110-4ca1-5706-aec2-b3dd0a480d1e", 00:09:21.471 "is_configured": true, 00:09:21.471 "data_offset": 2048, 00:09:21.471 "data_size": 63488 00:09:21.471 } 00:09:21.471 ] 00:09:21.471 }' 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.471 09:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.039 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.039 09:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.039 [2024-12-12 09:22:55.887718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.977 "name": "raid_bdev1", 00:09:22.977 "uuid": "79297f09-281e-472e-9961-e711b58e5c55", 00:09:22.977 "strip_size_kb": 64, 00:09:22.977 "state": "online", 
00:09:22.977 "raid_level": "concat", 00:09:22.977 "superblock": true, 00:09:22.977 "num_base_bdevs": 3, 00:09:22.977 "num_base_bdevs_discovered": 3, 00:09:22.977 "num_base_bdevs_operational": 3, 00:09:22.977 "base_bdevs_list": [ 00:09:22.977 { 00:09:22.977 "name": "BaseBdev1", 00:09:22.977 "uuid": "6382b1b5-883f-5a56-a94b-a9a030fe60a7", 00:09:22.977 "is_configured": true, 00:09:22.977 "data_offset": 2048, 00:09:22.977 "data_size": 63488 00:09:22.977 }, 00:09:22.977 { 00:09:22.977 "name": "BaseBdev2", 00:09:22.977 "uuid": "3b13d9f0-c730-523b-b376-830a9de30480", 00:09:22.977 "is_configured": true, 00:09:22.977 "data_offset": 2048, 00:09:22.977 "data_size": 63488 00:09:22.977 }, 00:09:22.977 { 00:09:22.977 "name": "BaseBdev3", 00:09:22.977 "uuid": "ce48b110-4ca1-5706-aec2-b3dd0a480d1e", 00:09:22.977 "is_configured": true, 00:09:22.977 "data_offset": 2048, 00:09:22.977 "data_size": 63488 00:09:22.977 } 00:09:22.977 ] 00:09:22.977 }' 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.977 09:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.237 09:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.237 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.237 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.237 [2024-12-12 09:22:57.252177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.237 [2024-12-12 09:22:57.252226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.237 [2024-12-12 09:22:57.254914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.237 [2024-12-12 09:22:57.254974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.237 [2024-12-12 09:22:57.255019] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.237 [2024-12-12 09:22:57.255029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:23.237 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.237 09:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68398 00:09:23.237 { 00:09:23.237 "results": [ 00:09:23.237 { 00:09:23.237 "job": "raid_bdev1", 00:09:23.237 "core_mask": "0x1", 00:09:23.238 "workload": "randrw", 00:09:23.238 "percentage": 50, 00:09:23.238 "status": "finished", 00:09:23.238 "queue_depth": 1, 00:09:23.238 "io_size": 131072, 00:09:23.238 "runtime": 1.365162, 00:09:23.238 "iops": 14040.824458928684, 00:09:23.238 "mibps": 1755.1030573660855, 00:09:23.238 "io_failed": 1, 00:09:23.238 "io_timeout": 0, 00:09:23.238 "avg_latency_us": 100.10757379602848, 00:09:23.238 "min_latency_us": 25.2646288209607, 00:09:23.238 "max_latency_us": 1294.9799126637554 00:09:23.238 } 00:09:23.238 ], 00:09:23.238 "core_count": 1 00:09:23.238 } 00:09:23.238 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68398 ']' 00:09:23.238 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68398 00:09:23.238 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:23.497 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.497 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68398 00:09:23.497 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.497 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.497 killing process with pid 68398 00:09:23.497 09:22:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68398' 00:09:23.497 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68398 00:09:23.497 [2024-12-12 09:22:57.298263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.497 09:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68398 00:09:23.756 [2024-12-12 09:22:57.538652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.69qvHq885w 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:25.136 00:09:25.136 real 0m4.633s 00:09:25.136 user 0m5.370s 00:09:25.136 sys 0m0.656s 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.136 09:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.136 ************************************ 00:09:25.136 END TEST raid_write_error_test 00:09:25.136 ************************************ 00:09:25.136 09:22:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:25.136 09:22:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:25.136 09:22:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:25.136 09:22:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.136 09:22:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.136 ************************************ 00:09:25.136 START TEST raid_state_function_test 00:09:25.136 ************************************ 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:25.136 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68542 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:25.137 Process raid pid: 68542 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68542' 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68542 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 68542 ']' 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.137 09:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.137 [2024-12-12 09:22:58.976331] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:09:25.137 [2024-12-12 09:22:58.976448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.137 [2024-12-12 09:22:59.148423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.396 [2024-12-12 09:22:59.277323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.656 [2024-12-12 09:22:59.509935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.656 [2024-12-12 09:22:59.509983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.915 [2024-12-12 09:22:59.793221] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.915 [2024-12-12 09:22:59.793277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.915 [2024-12-12 09:22:59.793287] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.915 [2024-12-12 09:22:59.793298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.915 [2024-12-12 09:22:59.793304] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.915 [2024-12-12 09:22:59.793314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.915 
09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.915 09:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.916 "name": "Existed_Raid", 00:09:25.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.916 "strip_size_kb": 0, 00:09:25.916 "state": "configuring", 00:09:25.916 "raid_level": "raid1", 00:09:25.916 "superblock": false, 00:09:25.916 "num_base_bdevs": 3, 00:09:25.916 "num_base_bdevs_discovered": 0, 00:09:25.916 "num_base_bdevs_operational": 3, 00:09:25.916 "base_bdevs_list": [ 00:09:25.916 { 00:09:25.916 "name": "BaseBdev1", 00:09:25.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.916 "is_configured": false, 00:09:25.916 "data_offset": 0, 00:09:25.916 "data_size": 0 00:09:25.916 }, 00:09:25.916 { 00:09:25.916 "name": "BaseBdev2", 00:09:25.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.916 "is_configured": false, 00:09:25.916 "data_offset": 0, 00:09:25.916 "data_size": 0 00:09:25.916 }, 00:09:25.916 { 00:09:25.916 "name": "BaseBdev3", 00:09:25.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.916 "is_configured": false, 00:09:25.916 "data_offset": 0, 00:09:25.916 "data_size": 0 00:09:25.916 } 00:09:25.916 ] 00:09:25.916 }' 00:09:25.916 09:22:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.916 09:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.485 [2024-12-12 09:23:00.240402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.485 [2024-12-12 09:23:00.240441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.485 [2024-12-12 09:23:00.252379] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.485 [2024-12-12 09:23:00.252421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.485 [2024-12-12 09:23:00.252430] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.485 [2024-12-12 09:23:00.252440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.485 [2024-12-12 09:23:00.252446] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.485 [2024-12-12 09:23:00.252455] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.485 [2024-12-12 09:23:00.306622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.485 BaseBdev1 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.485 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.485 [ 00:09:26.485 { 00:09:26.485 "name": "BaseBdev1", 00:09:26.485 "aliases": [ 00:09:26.485 "3118bee7-0c25-4ed4-a311-a86757559329" 00:09:26.485 ], 00:09:26.485 "product_name": "Malloc disk", 00:09:26.485 "block_size": 512, 00:09:26.485 "num_blocks": 65536, 00:09:26.485 "uuid": "3118bee7-0c25-4ed4-a311-a86757559329", 00:09:26.485 "assigned_rate_limits": { 00:09:26.485 "rw_ios_per_sec": 0, 00:09:26.485 "rw_mbytes_per_sec": 0, 00:09:26.485 "r_mbytes_per_sec": 0, 00:09:26.485 "w_mbytes_per_sec": 0 00:09:26.485 }, 00:09:26.485 "claimed": true, 00:09:26.486 "claim_type": "exclusive_write", 00:09:26.486 "zoned": false, 00:09:26.486 "supported_io_types": { 00:09:26.486 "read": true, 00:09:26.486 "write": true, 00:09:26.486 "unmap": true, 00:09:26.486 "flush": true, 00:09:26.486 "reset": true, 00:09:26.486 "nvme_admin": false, 00:09:26.486 "nvme_io": false, 00:09:26.486 "nvme_io_md": false, 00:09:26.486 "write_zeroes": true, 00:09:26.486 "zcopy": true, 00:09:26.486 "get_zone_info": false, 00:09:26.486 "zone_management": false, 00:09:26.486 "zone_append": false, 00:09:26.486 "compare": false, 00:09:26.486 "compare_and_write": false, 00:09:26.486 "abort": true, 00:09:26.486 "seek_hole": false, 00:09:26.486 "seek_data": false, 00:09:26.486 "copy": true, 00:09:26.486 "nvme_iov_md": false 00:09:26.486 }, 00:09:26.486 "memory_domains": [ 00:09:26.486 { 00:09:26.486 "dma_device_id": "system", 00:09:26.486 "dma_device_type": 1 00:09:26.486 }, 00:09:26.486 { 00:09:26.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.486 "dma_device_type": 2 00:09:26.486 } 00:09:26.486 ], 00:09:26.486 "driver_specific": {} 00:09:26.486 } 00:09:26.486 ] 00:09:26.486 09:23:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:26.486 "name": "Existed_Raid", 00:09:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.486 "strip_size_kb": 0, 00:09:26.486 "state": "configuring", 00:09:26.486 "raid_level": "raid1", 00:09:26.486 "superblock": false, 00:09:26.486 "num_base_bdevs": 3, 00:09:26.486 "num_base_bdevs_discovered": 1, 00:09:26.486 "num_base_bdevs_operational": 3, 00:09:26.486 "base_bdevs_list": [ 00:09:26.486 { 00:09:26.486 "name": "BaseBdev1", 00:09:26.486 "uuid": "3118bee7-0c25-4ed4-a311-a86757559329", 00:09:26.486 "is_configured": true, 00:09:26.486 "data_offset": 0, 00:09:26.486 "data_size": 65536 00:09:26.486 }, 00:09:26.486 { 00:09:26.486 "name": "BaseBdev2", 00:09:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.486 "is_configured": false, 00:09:26.486 "data_offset": 0, 00:09:26.486 "data_size": 0 00:09:26.486 }, 00:09:26.486 { 00:09:26.486 "name": "BaseBdev3", 00:09:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.486 "is_configured": false, 00:09:26.486 "data_offset": 0, 00:09:26.486 "data_size": 0 00:09:26.486 } 00:09:26.486 ] 00:09:26.486 }' 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.486 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.056 [2024-12-12 09:23:00.809762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.056 [2024-12-12 09:23:00.809802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.056 [2024-12-12 09:23:00.817802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.056 [2024-12-12 09:23:00.819801] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.056 [2024-12-12 09:23:00.819837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.056 [2024-12-12 09:23:00.819846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.056 [2024-12-12 09:23:00.819854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.056 "name": "Existed_Raid", 00:09:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.056 "strip_size_kb": 0, 00:09:27.056 "state": "configuring", 00:09:27.056 "raid_level": "raid1", 00:09:27.056 "superblock": false, 00:09:27.056 "num_base_bdevs": 3, 00:09:27.056 "num_base_bdevs_discovered": 1, 00:09:27.056 "num_base_bdevs_operational": 3, 00:09:27.056 "base_bdevs_list": [ 00:09:27.056 { 00:09:27.056 "name": "BaseBdev1", 00:09:27.056 "uuid": "3118bee7-0c25-4ed4-a311-a86757559329", 00:09:27.056 "is_configured": true, 00:09:27.056 "data_offset": 0, 00:09:27.056 "data_size": 65536 00:09:27.056 }, 00:09:27.056 { 00:09:27.056 "name": "BaseBdev2", 00:09:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.056 
"is_configured": false, 00:09:27.056 "data_offset": 0, 00:09:27.056 "data_size": 0 00:09:27.056 }, 00:09:27.056 { 00:09:27.056 "name": "BaseBdev3", 00:09:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.056 "is_configured": false, 00:09:27.056 "data_offset": 0, 00:09:27.056 "data_size": 0 00:09:27.056 } 00:09:27.056 ] 00:09:27.056 }' 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.056 09:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.316 [2024-12-12 09:23:01.312814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.316 BaseBdev2 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.316 09:23:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.316 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.316 [ 00:09:27.316 { 00:09:27.316 "name": "BaseBdev2", 00:09:27.316 "aliases": [ 00:09:27.316 "16838a92-43ae-49ee-b6d0-985349b8ca7a" 00:09:27.316 ], 00:09:27.316 "product_name": "Malloc disk", 00:09:27.316 "block_size": 512, 00:09:27.316 "num_blocks": 65536, 00:09:27.576 "uuid": "16838a92-43ae-49ee-b6d0-985349b8ca7a", 00:09:27.576 "assigned_rate_limits": { 00:09:27.576 "rw_ios_per_sec": 0, 00:09:27.576 "rw_mbytes_per_sec": 0, 00:09:27.576 "r_mbytes_per_sec": 0, 00:09:27.576 "w_mbytes_per_sec": 0 00:09:27.576 }, 00:09:27.576 "claimed": true, 00:09:27.576 "claim_type": "exclusive_write", 00:09:27.576 "zoned": false, 00:09:27.576 "supported_io_types": { 00:09:27.576 "read": true, 00:09:27.576 "write": true, 00:09:27.576 "unmap": true, 00:09:27.576 "flush": true, 00:09:27.576 "reset": true, 00:09:27.576 "nvme_admin": false, 00:09:27.576 "nvme_io": false, 00:09:27.576 "nvme_io_md": false, 00:09:27.576 "write_zeroes": true, 00:09:27.576 "zcopy": true, 00:09:27.576 "get_zone_info": false, 00:09:27.576 "zone_management": false, 00:09:27.576 "zone_append": false, 00:09:27.576 "compare": false, 00:09:27.576 "compare_and_write": false, 00:09:27.576 "abort": true, 00:09:27.576 "seek_hole": false, 00:09:27.576 "seek_data": false, 00:09:27.576 "copy": true, 00:09:27.576 "nvme_iov_md": false 00:09:27.576 }, 00:09:27.576 
"memory_domains": [ 00:09:27.576 { 00:09:27.576 "dma_device_id": "system", 00:09:27.576 "dma_device_type": 1 00:09:27.576 }, 00:09:27.576 { 00:09:27.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.576 "dma_device_type": 2 00:09:27.576 } 00:09:27.576 ], 00:09:27.576 "driver_specific": {} 00:09:27.576 } 00:09:27.576 ] 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.576 "name": "Existed_Raid", 00:09:27.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.576 "strip_size_kb": 0, 00:09:27.576 "state": "configuring", 00:09:27.576 "raid_level": "raid1", 00:09:27.576 "superblock": false, 00:09:27.576 "num_base_bdevs": 3, 00:09:27.576 "num_base_bdevs_discovered": 2, 00:09:27.576 "num_base_bdevs_operational": 3, 00:09:27.576 "base_bdevs_list": [ 00:09:27.576 { 00:09:27.576 "name": "BaseBdev1", 00:09:27.576 "uuid": "3118bee7-0c25-4ed4-a311-a86757559329", 00:09:27.576 "is_configured": true, 00:09:27.576 "data_offset": 0, 00:09:27.576 "data_size": 65536 00:09:27.576 }, 00:09:27.576 { 00:09:27.576 "name": "BaseBdev2", 00:09:27.576 "uuid": "16838a92-43ae-49ee-b6d0-985349b8ca7a", 00:09:27.576 "is_configured": true, 00:09:27.576 "data_offset": 0, 00:09:27.576 "data_size": 65536 00:09:27.576 }, 00:09:27.576 { 00:09:27.576 "name": "BaseBdev3", 00:09:27.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.576 "is_configured": false, 00:09:27.576 "data_offset": 0, 00:09:27.576 "data_size": 0 00:09:27.576 } 00:09:27.576 ] 00:09:27.576 }' 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.576 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.836 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:27.836 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.836 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 [2024-12-12 09:23:01.878044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.096 [2024-12-12 09:23:01.878097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.096 [2024-12-12 09:23:01.878112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:28.096 [2024-12-12 09:23:01.878582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:28.096 [2024-12-12 09:23:01.878786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.096 [2024-12-12 09:23:01.878802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:28.096 [2024-12-12 09:23:01.879077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.096 BaseBdev3 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 [ 00:09:28.096 { 00:09:28.096 "name": "BaseBdev3", 00:09:28.096 "aliases": [ 00:09:28.096 "f1b8b752-857a-42b5-bcca-acbcafe8aed8" 00:09:28.096 ], 00:09:28.096 "product_name": "Malloc disk", 00:09:28.096 "block_size": 512, 00:09:28.096 "num_blocks": 65536, 00:09:28.096 "uuid": "f1b8b752-857a-42b5-bcca-acbcafe8aed8", 00:09:28.096 "assigned_rate_limits": { 00:09:28.096 "rw_ios_per_sec": 0, 00:09:28.096 "rw_mbytes_per_sec": 0, 00:09:28.096 "r_mbytes_per_sec": 0, 00:09:28.096 "w_mbytes_per_sec": 0 00:09:28.096 }, 00:09:28.096 "claimed": true, 00:09:28.096 "claim_type": "exclusive_write", 00:09:28.096 "zoned": false, 00:09:28.096 "supported_io_types": { 00:09:28.096 "read": true, 00:09:28.096 "write": true, 00:09:28.096 "unmap": true, 00:09:28.096 "flush": true, 00:09:28.096 "reset": true, 00:09:28.096 "nvme_admin": false, 00:09:28.096 "nvme_io": false, 00:09:28.096 "nvme_io_md": false, 00:09:28.096 "write_zeroes": true, 00:09:28.096 "zcopy": true, 00:09:28.096 "get_zone_info": false, 00:09:28.096 "zone_management": false, 00:09:28.096 "zone_append": false, 00:09:28.096 "compare": false, 00:09:28.096 "compare_and_write": false, 00:09:28.096 "abort": true, 00:09:28.096 "seek_hole": false, 00:09:28.096 "seek_data": false, 00:09:28.096 
"copy": true, 00:09:28.096 "nvme_iov_md": false 00:09:28.096 }, 00:09:28.096 "memory_domains": [ 00:09:28.096 { 00:09:28.096 "dma_device_id": "system", 00:09:28.096 "dma_device_type": 1 00:09:28.096 }, 00:09:28.096 { 00:09:28.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.096 "dma_device_type": 2 00:09:28.096 } 00:09:28.096 ], 00:09:28.096 "driver_specific": {} 00:09:28.096 } 00:09:28.096 ] 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.096 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.097 09:23:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.097 "name": "Existed_Raid", 00:09:28.097 "uuid": "6600d85b-5344-4ef0-b674-524276f3c7f9", 00:09:28.097 "strip_size_kb": 0, 00:09:28.097 "state": "online", 00:09:28.097 "raid_level": "raid1", 00:09:28.097 "superblock": false, 00:09:28.097 "num_base_bdevs": 3, 00:09:28.097 "num_base_bdevs_discovered": 3, 00:09:28.097 "num_base_bdevs_operational": 3, 00:09:28.097 "base_bdevs_list": [ 00:09:28.097 { 00:09:28.097 "name": "BaseBdev1", 00:09:28.097 "uuid": "3118bee7-0c25-4ed4-a311-a86757559329", 00:09:28.097 "is_configured": true, 00:09:28.097 "data_offset": 0, 00:09:28.097 "data_size": 65536 00:09:28.097 }, 00:09:28.097 { 00:09:28.097 "name": "BaseBdev2", 00:09:28.097 "uuid": "16838a92-43ae-49ee-b6d0-985349b8ca7a", 00:09:28.097 "is_configured": true, 00:09:28.097 "data_offset": 0, 00:09:28.097 "data_size": 65536 00:09:28.097 }, 00:09:28.097 { 00:09:28.097 "name": "BaseBdev3", 00:09:28.097 "uuid": "f1b8b752-857a-42b5-bcca-acbcafe8aed8", 00:09:28.097 "is_configured": true, 00:09:28.097 "data_offset": 0, 00:09:28.097 "data_size": 65536 00:09:28.097 } 00:09:28.097 ] 00:09:28.097 }' 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.097 09:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.357 09:23:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.357 [2024-12-12 09:23:02.329626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.357 "name": "Existed_Raid", 00:09:28.357 "aliases": [ 00:09:28.357 "6600d85b-5344-4ef0-b674-524276f3c7f9" 00:09:28.357 ], 00:09:28.357 "product_name": "Raid Volume", 00:09:28.357 "block_size": 512, 00:09:28.357 "num_blocks": 65536, 00:09:28.357 "uuid": "6600d85b-5344-4ef0-b674-524276f3c7f9", 00:09:28.357 "assigned_rate_limits": { 00:09:28.357 "rw_ios_per_sec": 0, 00:09:28.357 "rw_mbytes_per_sec": 0, 00:09:28.357 "r_mbytes_per_sec": 0, 00:09:28.357 "w_mbytes_per_sec": 0 00:09:28.357 }, 00:09:28.357 "claimed": false, 00:09:28.357 "zoned": false, 
00:09:28.357 "supported_io_types": { 00:09:28.357 "read": true, 00:09:28.357 "write": true, 00:09:28.357 "unmap": false, 00:09:28.357 "flush": false, 00:09:28.357 "reset": true, 00:09:28.357 "nvme_admin": false, 00:09:28.357 "nvme_io": false, 00:09:28.357 "nvme_io_md": false, 00:09:28.357 "write_zeroes": true, 00:09:28.357 "zcopy": false, 00:09:28.357 "get_zone_info": false, 00:09:28.357 "zone_management": false, 00:09:28.357 "zone_append": false, 00:09:28.357 "compare": false, 00:09:28.357 "compare_and_write": false, 00:09:28.357 "abort": false, 00:09:28.357 "seek_hole": false, 00:09:28.357 "seek_data": false, 00:09:28.357 "copy": false, 00:09:28.357 "nvme_iov_md": false 00:09:28.357 }, 00:09:28.357 "memory_domains": [ 00:09:28.357 { 00:09:28.357 "dma_device_id": "system", 00:09:28.357 "dma_device_type": 1 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.357 "dma_device_type": 2 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "dma_device_id": "system", 00:09:28.357 "dma_device_type": 1 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.357 "dma_device_type": 2 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "dma_device_id": "system", 00:09:28.357 "dma_device_type": 1 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.357 "dma_device_type": 2 00:09:28.357 } 00:09:28.357 ], 00:09:28.357 "driver_specific": { 00:09:28.357 "raid": { 00:09:28.357 "uuid": "6600d85b-5344-4ef0-b674-524276f3c7f9", 00:09:28.357 "strip_size_kb": 0, 00:09:28.357 "state": "online", 00:09:28.357 "raid_level": "raid1", 00:09:28.357 "superblock": false, 00:09:28.357 "num_base_bdevs": 3, 00:09:28.357 "num_base_bdevs_discovered": 3, 00:09:28.357 "num_base_bdevs_operational": 3, 00:09:28.357 "base_bdevs_list": [ 00:09:28.357 { 00:09:28.357 "name": "BaseBdev1", 00:09:28.357 "uuid": "3118bee7-0c25-4ed4-a311-a86757559329", 00:09:28.357 "is_configured": true, 00:09:28.357 
"data_offset": 0, 00:09:28.357 "data_size": 65536 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "name": "BaseBdev2", 00:09:28.357 "uuid": "16838a92-43ae-49ee-b6d0-985349b8ca7a", 00:09:28.357 "is_configured": true, 00:09:28.357 "data_offset": 0, 00:09:28.357 "data_size": 65536 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "name": "BaseBdev3", 00:09:28.357 "uuid": "f1b8b752-857a-42b5-bcca-acbcafe8aed8", 00:09:28.357 "is_configured": true, 00:09:28.357 "data_offset": 0, 00:09:28.357 "data_size": 65536 00:09:28.357 } 00:09:28.357 ] 00:09:28.357 } 00:09:28.357 } 00:09:28.357 }' 00:09:28.357 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:28.617 BaseBdev2 00:09:28.617 BaseBdev3' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.617 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 [2024-12-12 09:23:02.569025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.877 "name": "Existed_Raid", 00:09:28.877 "uuid": "6600d85b-5344-4ef0-b674-524276f3c7f9", 00:09:28.877 "strip_size_kb": 0, 00:09:28.877 "state": "online", 00:09:28.877 "raid_level": "raid1", 00:09:28.877 "superblock": false, 00:09:28.877 "num_base_bdevs": 3, 00:09:28.877 "num_base_bdevs_discovered": 2, 00:09:28.877 "num_base_bdevs_operational": 2, 00:09:28.877 "base_bdevs_list": [ 00:09:28.877 { 00:09:28.877 "name": null, 00:09:28.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.877 "is_configured": false, 00:09:28.877 "data_offset": 0, 00:09:28.877 "data_size": 65536 00:09:28.877 }, 00:09:28.877 { 00:09:28.877 "name": "BaseBdev2", 00:09:28.877 "uuid": "16838a92-43ae-49ee-b6d0-985349b8ca7a", 00:09:28.877 "is_configured": true, 00:09:28.877 "data_offset": 0, 00:09:28.877 "data_size": 65536 00:09:28.877 }, 00:09:28.877 { 00:09:28.877 "name": "BaseBdev3", 00:09:28.877 "uuid": "f1b8b752-857a-42b5-bcca-acbcafe8aed8", 00:09:28.877 "is_configured": true, 00:09:28.877 "data_offset": 0, 00:09:28.877 "data_size": 65536 00:09:28.877 } 00:09:28.877 ] 
00:09:28.877 }' 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.877 09:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.137 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.137 [2024-12-12 09:23:03.101427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.397 09:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.397 [2024-12-12 09:23:03.257363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.397 [2024-12-12 09:23:03.257488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.397 [2024-12-12 09:23:03.359356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.397 [2024-12-12 09:23:03.359422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.397 [2024-12-12 09:23:03.359437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.397 09:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.397 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.657 BaseBdev2 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.657 
09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.657 [ 00:09:29.657 { 00:09:29.657 "name": "BaseBdev2", 00:09:29.657 "aliases": [ 00:09:29.657 "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58" 00:09:29.657 ], 00:09:29.657 "product_name": "Malloc disk", 00:09:29.657 "block_size": 512, 00:09:29.657 "num_blocks": 65536, 00:09:29.657 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:29.657 "assigned_rate_limits": { 00:09:29.657 "rw_ios_per_sec": 0, 00:09:29.657 "rw_mbytes_per_sec": 0, 00:09:29.657 "r_mbytes_per_sec": 0, 00:09:29.657 "w_mbytes_per_sec": 0 00:09:29.657 }, 00:09:29.657 "claimed": false, 00:09:29.657 "zoned": false, 00:09:29.657 "supported_io_types": { 00:09:29.657 "read": true, 00:09:29.657 "write": true, 00:09:29.657 "unmap": true, 00:09:29.657 "flush": true, 00:09:29.657 "reset": true, 00:09:29.657 "nvme_admin": false, 00:09:29.657 "nvme_io": false, 00:09:29.657 "nvme_io_md": false, 00:09:29.657 "write_zeroes": true, 
00:09:29.657 "zcopy": true, 00:09:29.657 "get_zone_info": false, 00:09:29.657 "zone_management": false, 00:09:29.657 "zone_append": false, 00:09:29.657 "compare": false, 00:09:29.657 "compare_and_write": false, 00:09:29.657 "abort": true, 00:09:29.657 "seek_hole": false, 00:09:29.657 "seek_data": false, 00:09:29.657 "copy": true, 00:09:29.657 "nvme_iov_md": false 00:09:29.657 }, 00:09:29.657 "memory_domains": [ 00:09:29.657 { 00:09:29.657 "dma_device_id": "system", 00:09:29.657 "dma_device_type": 1 00:09:29.657 }, 00:09:29.657 { 00:09:29.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.657 "dma_device_type": 2 00:09:29.657 } 00:09:29.657 ], 00:09:29.657 "driver_specific": {} 00:09:29.657 } 00:09:29.657 ] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.657 BaseBdev3 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.657 09:23:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.657 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.657 [ 00:09:29.657 { 00:09:29.657 "name": "BaseBdev3", 00:09:29.657 "aliases": [ 00:09:29.657 "c2d96faf-a275-416f-a607-3c751a3c8e9d" 00:09:29.657 ], 00:09:29.657 "product_name": "Malloc disk", 00:09:29.657 "block_size": 512, 00:09:29.657 "num_blocks": 65536, 00:09:29.657 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:29.657 "assigned_rate_limits": { 00:09:29.657 "rw_ios_per_sec": 0, 00:09:29.657 "rw_mbytes_per_sec": 0, 00:09:29.657 "r_mbytes_per_sec": 0, 00:09:29.658 "w_mbytes_per_sec": 0 00:09:29.658 }, 00:09:29.658 "claimed": false, 00:09:29.658 "zoned": false, 00:09:29.658 "supported_io_types": { 00:09:29.658 "read": true, 00:09:29.658 "write": true, 00:09:29.658 "unmap": true, 00:09:29.658 "flush": true, 00:09:29.658 "reset": true, 00:09:29.658 "nvme_admin": false, 00:09:29.658 "nvme_io": false, 00:09:29.658 "nvme_io_md": false, 00:09:29.658 "write_zeroes": true, 
00:09:29.658 "zcopy": true, 00:09:29.658 "get_zone_info": false, 00:09:29.658 "zone_management": false, 00:09:29.658 "zone_append": false, 00:09:29.658 "compare": false, 00:09:29.658 "compare_and_write": false, 00:09:29.658 "abort": true, 00:09:29.658 "seek_hole": false, 00:09:29.658 "seek_data": false, 00:09:29.658 "copy": true, 00:09:29.658 "nvme_iov_md": false 00:09:29.658 }, 00:09:29.658 "memory_domains": [ 00:09:29.658 { 00:09:29.658 "dma_device_id": "system", 00:09:29.658 "dma_device_type": 1 00:09:29.658 }, 00:09:29.658 { 00:09:29.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.658 "dma_device_type": 2 00:09:29.658 } 00:09:29.658 ], 00:09:29.658 "driver_specific": {} 00:09:29.658 } 00:09:29.658 ] 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.658 [2024-12-12 09:23:03.574091] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.658 [2024-12-12 09:23:03.574230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.658 [2024-12-12 09:23:03.574269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.658 [2024-12-12 09:23:03.576293] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:29.658 "name": "Existed_Raid", 00:09:29.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.658 "strip_size_kb": 0, 00:09:29.658 "state": "configuring", 00:09:29.658 "raid_level": "raid1", 00:09:29.658 "superblock": false, 00:09:29.658 "num_base_bdevs": 3, 00:09:29.658 "num_base_bdevs_discovered": 2, 00:09:29.658 "num_base_bdevs_operational": 3, 00:09:29.658 "base_bdevs_list": [ 00:09:29.658 { 00:09:29.658 "name": "BaseBdev1", 00:09:29.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.658 "is_configured": false, 00:09:29.658 "data_offset": 0, 00:09:29.658 "data_size": 0 00:09:29.658 }, 00:09:29.658 { 00:09:29.658 "name": "BaseBdev2", 00:09:29.658 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:29.658 "is_configured": true, 00:09:29.658 "data_offset": 0, 00:09:29.658 "data_size": 65536 00:09:29.658 }, 00:09:29.658 { 00:09:29.658 "name": "BaseBdev3", 00:09:29.658 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:29.658 "is_configured": true, 00:09:29.658 "data_offset": 0, 00:09:29.658 "data_size": 65536 00:09:29.658 } 00:09:29.658 ] 00:09:29.658 }' 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.658 09:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.228 [2024-12-12 09:23:04.037331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.228 "name": "Existed_Raid", 00:09:30.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.228 "strip_size_kb": 0, 00:09:30.228 "state": "configuring", 00:09:30.228 "raid_level": "raid1", 00:09:30.228 "superblock": false, 00:09:30.228 "num_base_bdevs": 3, 
00:09:30.228 "num_base_bdevs_discovered": 1, 00:09:30.228 "num_base_bdevs_operational": 3, 00:09:30.228 "base_bdevs_list": [ 00:09:30.228 { 00:09:30.228 "name": "BaseBdev1", 00:09:30.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.228 "is_configured": false, 00:09:30.228 "data_offset": 0, 00:09:30.228 "data_size": 0 00:09:30.228 }, 00:09:30.228 { 00:09:30.228 "name": null, 00:09:30.228 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:30.228 "is_configured": false, 00:09:30.228 "data_offset": 0, 00:09:30.228 "data_size": 65536 00:09:30.228 }, 00:09:30.228 { 00:09:30.228 "name": "BaseBdev3", 00:09:30.228 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:30.228 "is_configured": true, 00:09:30.228 "data_offset": 0, 00:09:30.228 "data_size": 65536 00:09:30.228 } 00:09:30.228 ] 00:09:30.228 }' 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.228 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.488 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.488 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.488 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.488 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.488 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.747 09:23:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.747 [2024-12-12 09:23:04.556792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.747 BaseBdev1 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.747 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.747 [ 00:09:30.747 { 00:09:30.747 "name": "BaseBdev1", 00:09:30.747 "aliases": [ 00:09:30.747 "9c952a65-21c3-43c2-b5c5-73cb24ea22e0" 00:09:30.747 ], 00:09:30.747 "product_name": "Malloc disk", 
00:09:30.747 "block_size": 512, 00:09:30.747 "num_blocks": 65536, 00:09:30.747 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:30.747 "assigned_rate_limits": { 00:09:30.747 "rw_ios_per_sec": 0, 00:09:30.747 "rw_mbytes_per_sec": 0, 00:09:30.748 "r_mbytes_per_sec": 0, 00:09:30.748 "w_mbytes_per_sec": 0 00:09:30.748 }, 00:09:30.748 "claimed": true, 00:09:30.748 "claim_type": "exclusive_write", 00:09:30.748 "zoned": false, 00:09:30.748 "supported_io_types": { 00:09:30.748 "read": true, 00:09:30.748 "write": true, 00:09:30.748 "unmap": true, 00:09:30.748 "flush": true, 00:09:30.748 "reset": true, 00:09:30.748 "nvme_admin": false, 00:09:30.748 "nvme_io": false, 00:09:30.748 "nvme_io_md": false, 00:09:30.748 "write_zeroes": true, 00:09:30.748 "zcopy": true, 00:09:30.748 "get_zone_info": false, 00:09:30.748 "zone_management": false, 00:09:30.748 "zone_append": false, 00:09:30.748 "compare": false, 00:09:30.748 "compare_and_write": false, 00:09:30.748 "abort": true, 00:09:30.748 "seek_hole": false, 00:09:30.748 "seek_data": false, 00:09:30.748 "copy": true, 00:09:30.748 "nvme_iov_md": false 00:09:30.748 }, 00:09:30.748 "memory_domains": [ 00:09:30.748 { 00:09:30.748 "dma_device_id": "system", 00:09:30.748 "dma_device_type": 1 00:09:30.748 }, 00:09:30.748 { 00:09:30.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.748 "dma_device_type": 2 00:09:30.748 } 00:09:30.748 ], 00:09:30.748 "driver_specific": {} 00:09:30.748 } 00:09:30.748 ] 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.748 "name": "Existed_Raid", 00:09:30.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.748 "strip_size_kb": 0, 00:09:30.748 "state": "configuring", 00:09:30.748 "raid_level": "raid1", 00:09:30.748 "superblock": false, 00:09:30.748 "num_base_bdevs": 3, 00:09:30.748 "num_base_bdevs_discovered": 2, 00:09:30.748 "num_base_bdevs_operational": 3, 00:09:30.748 "base_bdevs_list": [ 00:09:30.748 { 00:09:30.748 "name": "BaseBdev1", 00:09:30.748 "uuid": 
"9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:30.748 "is_configured": true, 00:09:30.748 "data_offset": 0, 00:09:30.748 "data_size": 65536 00:09:30.748 }, 00:09:30.748 { 00:09:30.748 "name": null, 00:09:30.748 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:30.748 "is_configured": false, 00:09:30.748 "data_offset": 0, 00:09:30.748 "data_size": 65536 00:09:30.748 }, 00:09:30.748 { 00:09:30.748 "name": "BaseBdev3", 00:09:30.748 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:30.748 "is_configured": true, 00:09:30.748 "data_offset": 0, 00:09:30.748 "data_size": 65536 00:09:30.748 } 00:09:30.748 ] 00:09:30.748 }' 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.748 09:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.007 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.007 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.007 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.007 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.007 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.266 [2024-12-12 09:23:05.063976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.266 09:23:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.266 "name": "Existed_Raid", 00:09:31.266 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:31.266 "strip_size_kb": 0, 00:09:31.266 "state": "configuring", 00:09:31.266 "raid_level": "raid1", 00:09:31.266 "superblock": false, 00:09:31.266 "num_base_bdevs": 3, 00:09:31.266 "num_base_bdevs_discovered": 1, 00:09:31.266 "num_base_bdevs_operational": 3, 00:09:31.266 "base_bdevs_list": [ 00:09:31.266 { 00:09:31.266 "name": "BaseBdev1", 00:09:31.266 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:31.266 "is_configured": true, 00:09:31.266 "data_offset": 0, 00:09:31.266 "data_size": 65536 00:09:31.266 }, 00:09:31.266 { 00:09:31.266 "name": null, 00:09:31.266 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:31.266 "is_configured": false, 00:09:31.266 "data_offset": 0, 00:09:31.266 "data_size": 65536 00:09:31.266 }, 00:09:31.266 { 00:09:31.266 "name": null, 00:09:31.266 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:31.266 "is_configured": false, 00:09:31.266 "data_offset": 0, 00:09:31.266 "data_size": 65536 00:09:31.266 } 00:09:31.266 ] 00:09:31.266 }' 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.266 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.525 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.525 [2024-12-12 09:23:05.543222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.784 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.784 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.785 "name": "Existed_Raid", 00:09:31.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.785 "strip_size_kb": 0, 00:09:31.785 "state": "configuring", 00:09:31.785 "raid_level": "raid1", 00:09:31.785 "superblock": false, 00:09:31.785 "num_base_bdevs": 3, 00:09:31.785 "num_base_bdevs_discovered": 2, 00:09:31.785 "num_base_bdevs_operational": 3, 00:09:31.785 "base_bdevs_list": [ 00:09:31.785 { 00:09:31.785 "name": "BaseBdev1", 00:09:31.785 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:31.785 "is_configured": true, 00:09:31.785 "data_offset": 0, 00:09:31.785 "data_size": 65536 00:09:31.785 }, 00:09:31.785 { 00:09:31.785 "name": null, 00:09:31.785 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:31.785 "is_configured": false, 00:09:31.785 "data_offset": 0, 00:09:31.785 "data_size": 65536 00:09:31.785 }, 00:09:31.785 { 00:09:31.785 "name": "BaseBdev3", 00:09:31.785 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:31.785 "is_configured": true, 00:09:31.785 "data_offset": 0, 00:09:31.785 "data_size": 65536 00:09:31.785 } 00:09:31.785 ] 00:09:31.785 }' 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.785 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.044 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.044 [2024-12-12 09:23:06.034461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.303 "name": "Existed_Raid", 00:09:32.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.303 "strip_size_kb": 0, 00:09:32.303 "state": "configuring", 00:09:32.303 "raid_level": "raid1", 00:09:32.303 "superblock": false, 00:09:32.303 "num_base_bdevs": 3, 00:09:32.303 "num_base_bdevs_discovered": 1, 00:09:32.303 "num_base_bdevs_operational": 3, 00:09:32.303 "base_bdevs_list": [ 00:09:32.303 { 00:09:32.303 "name": null, 00:09:32.303 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:32.303 "is_configured": false, 00:09:32.303 "data_offset": 0, 00:09:32.303 "data_size": 65536 00:09:32.303 }, 00:09:32.303 { 00:09:32.303 "name": null, 00:09:32.303 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:32.303 "is_configured": false, 00:09:32.303 "data_offset": 0, 00:09:32.303 "data_size": 65536 00:09:32.303 }, 00:09:32.303 { 00:09:32.303 "name": "BaseBdev3", 00:09:32.303 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:32.303 "is_configured": true, 00:09:32.303 "data_offset": 0, 00:09:32.303 "data_size": 65536 00:09:32.303 } 00:09:32.303 ] 00:09:32.303 }' 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.303 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.563 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.822 [2024-12-12 09:23:06.590805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.822 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.823 "name": "Existed_Raid", 00:09:32.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.823 "strip_size_kb": 0, 00:09:32.823 "state": "configuring", 00:09:32.823 "raid_level": "raid1", 00:09:32.823 "superblock": false, 00:09:32.823 "num_base_bdevs": 3, 00:09:32.823 "num_base_bdevs_discovered": 2, 00:09:32.823 "num_base_bdevs_operational": 3, 00:09:32.823 "base_bdevs_list": [ 00:09:32.823 { 00:09:32.823 "name": null, 00:09:32.823 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:32.823 "is_configured": false, 00:09:32.823 "data_offset": 0, 00:09:32.823 "data_size": 65536 00:09:32.823 }, 00:09:32.823 { 00:09:32.823 "name": "BaseBdev2", 00:09:32.823 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:32.823 "is_configured": true, 00:09:32.823 "data_offset": 0, 00:09:32.823 "data_size": 65536 00:09:32.823 }, 00:09:32.823 { 00:09:32.823 "name": "BaseBdev3", 
00:09:32.823 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:32.823 "is_configured": true, 00:09:32.823 "data_offset": 0, 00:09:32.823 "data_size": 65536 00:09:32.823 } 00:09:32.823 ] 00:09:32.823 }' 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.823 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.082 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.082 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.082 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.082 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.083 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.342 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9c952a65-21c3-43c2-b5c5-73cb24ea22e0 00:09:33.342 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.342 09:23:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.342 [2024-12-12 09:23:07.175936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:33.342 [2024-12-12 09:23:07.176007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:33.342 [2024-12-12 09:23:07.176016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:33.342 [2024-12-12 09:23:07.176304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:33.342 [2024-12-12 09:23:07.176484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.342 [2024-12-12 09:23:07.176497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:33.343 [2024-12-12 09:23:07.176758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.343 NewBaseBdev 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.343 
09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.343 [ 00:09:33.343 { 00:09:33.343 "name": "NewBaseBdev", 00:09:33.343 "aliases": [ 00:09:33.343 "9c952a65-21c3-43c2-b5c5-73cb24ea22e0" 00:09:33.343 ], 00:09:33.343 "product_name": "Malloc disk", 00:09:33.343 "block_size": 512, 00:09:33.343 "num_blocks": 65536, 00:09:33.343 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:33.343 "assigned_rate_limits": { 00:09:33.343 "rw_ios_per_sec": 0, 00:09:33.343 "rw_mbytes_per_sec": 0, 00:09:33.343 "r_mbytes_per_sec": 0, 00:09:33.343 "w_mbytes_per_sec": 0 00:09:33.343 }, 00:09:33.343 "claimed": true, 00:09:33.343 "claim_type": "exclusive_write", 00:09:33.343 "zoned": false, 00:09:33.343 "supported_io_types": { 00:09:33.343 "read": true, 00:09:33.343 "write": true, 00:09:33.343 "unmap": true, 00:09:33.343 "flush": true, 00:09:33.343 "reset": true, 00:09:33.343 "nvme_admin": false, 00:09:33.343 "nvme_io": false, 00:09:33.343 "nvme_io_md": false, 00:09:33.343 "write_zeroes": true, 00:09:33.343 "zcopy": true, 00:09:33.343 "get_zone_info": false, 00:09:33.343 "zone_management": false, 00:09:33.343 "zone_append": false, 00:09:33.343 "compare": false, 00:09:33.343 "compare_and_write": false, 00:09:33.343 "abort": true, 00:09:33.343 "seek_hole": false, 00:09:33.343 "seek_data": false, 00:09:33.343 "copy": true, 00:09:33.343 "nvme_iov_md": false 00:09:33.343 }, 00:09:33.343 "memory_domains": [ 00:09:33.343 { 00:09:33.343 "dma_device_id": "system", 00:09:33.343 "dma_device_type": 1 
00:09:33.343 }, 00:09:33.343 { 00:09:33.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.343 "dma_device_type": 2 00:09:33.343 } 00:09:33.343 ], 00:09:33.343 "driver_specific": {} 00:09:33.343 } 00:09:33.343 ] 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.343 "name": "Existed_Raid", 00:09:33.343 "uuid": "de576b11-fac0-4f45-992b-10551bea68eb", 00:09:33.343 "strip_size_kb": 0, 00:09:33.343 "state": "online", 00:09:33.343 "raid_level": "raid1", 00:09:33.343 "superblock": false, 00:09:33.343 "num_base_bdevs": 3, 00:09:33.343 "num_base_bdevs_discovered": 3, 00:09:33.343 "num_base_bdevs_operational": 3, 00:09:33.343 "base_bdevs_list": [ 00:09:33.343 { 00:09:33.343 "name": "NewBaseBdev", 00:09:33.343 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:33.343 "is_configured": true, 00:09:33.343 "data_offset": 0, 00:09:33.343 "data_size": 65536 00:09:33.343 }, 00:09:33.343 { 00:09:33.343 "name": "BaseBdev2", 00:09:33.343 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:33.343 "is_configured": true, 00:09:33.343 "data_offset": 0, 00:09:33.343 "data_size": 65536 00:09:33.343 }, 00:09:33.343 { 00:09:33.343 "name": "BaseBdev3", 00:09:33.343 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:33.343 "is_configured": true, 00:09:33.343 "data_offset": 0, 00:09:33.343 "data_size": 65536 00:09:33.343 } 00:09:33.343 ] 00:09:33.343 }' 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.343 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.602 [2024-12-12 09:23:07.603573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.602 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.861 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.861 "name": "Existed_Raid", 00:09:33.861 "aliases": [ 00:09:33.861 "de576b11-fac0-4f45-992b-10551bea68eb" 00:09:33.861 ], 00:09:33.861 "product_name": "Raid Volume", 00:09:33.862 "block_size": 512, 00:09:33.862 "num_blocks": 65536, 00:09:33.862 "uuid": "de576b11-fac0-4f45-992b-10551bea68eb", 00:09:33.862 "assigned_rate_limits": { 00:09:33.862 "rw_ios_per_sec": 0, 00:09:33.862 "rw_mbytes_per_sec": 0, 00:09:33.862 "r_mbytes_per_sec": 0, 00:09:33.862 "w_mbytes_per_sec": 0 00:09:33.862 }, 00:09:33.862 "claimed": false, 00:09:33.862 "zoned": false, 00:09:33.862 "supported_io_types": { 00:09:33.862 "read": true, 00:09:33.862 "write": true, 00:09:33.862 "unmap": false, 00:09:33.862 "flush": false, 00:09:33.862 "reset": true, 00:09:33.862 "nvme_admin": false, 00:09:33.862 "nvme_io": false, 00:09:33.862 "nvme_io_md": false, 00:09:33.862 "write_zeroes": true, 00:09:33.862 "zcopy": false, 00:09:33.862 "get_zone_info": false, 00:09:33.862 "zone_management": false, 00:09:33.862 
"zone_append": false, 00:09:33.862 "compare": false, 00:09:33.862 "compare_and_write": false, 00:09:33.862 "abort": false, 00:09:33.862 "seek_hole": false, 00:09:33.862 "seek_data": false, 00:09:33.862 "copy": false, 00:09:33.862 "nvme_iov_md": false 00:09:33.862 }, 00:09:33.862 "memory_domains": [ 00:09:33.862 { 00:09:33.862 "dma_device_id": "system", 00:09:33.862 "dma_device_type": 1 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.862 "dma_device_type": 2 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "dma_device_id": "system", 00:09:33.862 "dma_device_type": 1 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.862 "dma_device_type": 2 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "dma_device_id": "system", 00:09:33.862 "dma_device_type": 1 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.862 "dma_device_type": 2 00:09:33.862 } 00:09:33.862 ], 00:09:33.862 "driver_specific": { 00:09:33.862 "raid": { 00:09:33.862 "uuid": "de576b11-fac0-4f45-992b-10551bea68eb", 00:09:33.862 "strip_size_kb": 0, 00:09:33.862 "state": "online", 00:09:33.862 "raid_level": "raid1", 00:09:33.862 "superblock": false, 00:09:33.862 "num_base_bdevs": 3, 00:09:33.862 "num_base_bdevs_discovered": 3, 00:09:33.862 "num_base_bdevs_operational": 3, 00:09:33.862 "base_bdevs_list": [ 00:09:33.862 { 00:09:33.862 "name": "NewBaseBdev", 00:09:33.862 "uuid": "9c952a65-21c3-43c2-b5c5-73cb24ea22e0", 00:09:33.862 "is_configured": true, 00:09:33.862 "data_offset": 0, 00:09:33.862 "data_size": 65536 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "name": "BaseBdev2", 00:09:33.862 "uuid": "488c9a73-3ac7-47a7-9ac2-42e92d2bfa58", 00:09:33.862 "is_configured": true, 00:09:33.862 "data_offset": 0, 00:09:33.862 "data_size": 65536 00:09:33.862 }, 00:09:33.862 { 00:09:33.862 "name": "BaseBdev3", 00:09:33.862 "uuid": "c2d96faf-a275-416f-a607-3c751a3c8e9d", 00:09:33.862 "is_configured": true, 
00:09:33.862 "data_offset": 0, 00:09:33.862 "data_size": 65536 00:09:33.862 } 00:09:33.862 ] 00:09:33.862 } 00:09:33.862 } 00:09:33.862 }' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:33.862 BaseBdev2 00:09:33.862 BaseBdev3' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.862 09:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.862 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.122 [2024-12-12 09:23:07.886799] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:34.122 [2024-12-12 09:23:07.886872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.122 [2024-12-12 09:23:07.886977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.122 [2024-12-12 09:23:07.887289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.122 [2024-12-12 09:23:07.887303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68542 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 68542 ']' 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 68542 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68542 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68542' 00:09:34.122 killing process with pid 68542 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 68542 00:09:34.122 [2024-12-12 09:23:07.937487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:34.122 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 68542 00:09:34.381 [2024-12-12 09:23:08.250939] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:35.761 00:09:35.761 real 0m10.555s 00:09:35.761 user 0m16.574s 00:09:35.761 sys 0m1.927s 00:09:35.761 ************************************ 00:09:35.761 END TEST raid_state_function_test 00:09:35.761 ************************************ 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.761 09:23:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:35.761 09:23:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:35.761 09:23:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.761 09:23:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.761 ************************************ 00:09:35.761 START TEST raid_state_function_test_sb 00:09:35.761 ************************************ 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69163 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:35.761 Process raid pid: 69163 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69163' 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69163 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69163 ']' 00:09:35.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.761 09:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.761 [2024-12-12 09:23:09.585392] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:09:35.761 [2024-12-12 09:23:09.585507] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.761 [2024-12-12 09:23:09.762170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.021 [2024-12-12 09:23:09.890567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.281 [2024-12-12 09:23:10.132976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.281 [2024-12-12 09:23:10.133025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.541 [2024-12-12 09:23:10.409526] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.541 [2024-12-12 09:23:10.409586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.541 [2024-12-12 09:23:10.409596] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.541 [2024-12-12 09:23:10.409606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.541 [2024-12-12 09:23:10.409618] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:36.541 [2024-12-12 09:23:10.409627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.541 "name": "Existed_Raid", 00:09:36.541 "uuid": "19ddf54b-c0da-49e4-abee-35b69a31f949", 00:09:36.541 "strip_size_kb": 0, 00:09:36.541 "state": "configuring", 00:09:36.541 "raid_level": "raid1", 00:09:36.541 "superblock": true, 00:09:36.541 "num_base_bdevs": 3, 00:09:36.541 "num_base_bdevs_discovered": 0, 00:09:36.541 "num_base_bdevs_operational": 3, 00:09:36.541 "base_bdevs_list": [ 00:09:36.541 { 00:09:36.541 "name": "BaseBdev1", 00:09:36.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.541 "is_configured": false, 00:09:36.541 "data_offset": 0, 00:09:36.541 "data_size": 0 00:09:36.541 }, 00:09:36.541 { 00:09:36.541 "name": "BaseBdev2", 00:09:36.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.541 "is_configured": false, 00:09:36.541 "data_offset": 0, 00:09:36.541 "data_size": 0 00:09:36.541 }, 00:09:36.541 { 00:09:36.541 "name": "BaseBdev3", 00:09:36.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.541 "is_configured": false, 00:09:36.541 "data_offset": 0, 00:09:36.541 "data_size": 0 00:09:36.541 } 00:09:36.541 ] 00:09:36.541 }' 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.541 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.111 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.111 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.111 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.111 [2024-12-12 09:23:10.864772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.111 [2024-12-12 09:23:10.864890] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:37.111 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.111 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.112 [2024-12-12 09:23:10.872725] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.112 [2024-12-12 09:23:10.872776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.112 [2024-12-12 09:23:10.872786] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.112 [2024-12-12 09:23:10.872796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.112 [2024-12-12 09:23:10.872802] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.112 [2024-12-12 09:23:10.872810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.112 [2024-12-12 09:23:10.922251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.112 BaseBdev1 
00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.112 [ 00:09:37.112 { 00:09:37.112 "name": "BaseBdev1", 00:09:37.112 "aliases": [ 00:09:37.112 "039776e3-267c-41b6-bd8d-895fa125d4bb" 00:09:37.112 ], 00:09:37.112 "product_name": "Malloc disk", 00:09:37.112 "block_size": 512, 00:09:37.112 "num_blocks": 65536, 00:09:37.112 "uuid": "039776e3-267c-41b6-bd8d-895fa125d4bb", 00:09:37.112 "assigned_rate_limits": { 00:09:37.112 
"rw_ios_per_sec": 0, 00:09:37.112 "rw_mbytes_per_sec": 0, 00:09:37.112 "r_mbytes_per_sec": 0, 00:09:37.112 "w_mbytes_per_sec": 0 00:09:37.112 }, 00:09:37.112 "claimed": true, 00:09:37.112 "claim_type": "exclusive_write", 00:09:37.112 "zoned": false, 00:09:37.112 "supported_io_types": { 00:09:37.112 "read": true, 00:09:37.112 "write": true, 00:09:37.112 "unmap": true, 00:09:37.112 "flush": true, 00:09:37.112 "reset": true, 00:09:37.112 "nvme_admin": false, 00:09:37.112 "nvme_io": false, 00:09:37.112 "nvme_io_md": false, 00:09:37.112 "write_zeroes": true, 00:09:37.112 "zcopy": true, 00:09:37.112 "get_zone_info": false, 00:09:37.112 "zone_management": false, 00:09:37.112 "zone_append": false, 00:09:37.112 "compare": false, 00:09:37.112 "compare_and_write": false, 00:09:37.112 "abort": true, 00:09:37.112 "seek_hole": false, 00:09:37.112 "seek_data": false, 00:09:37.112 "copy": true, 00:09:37.112 "nvme_iov_md": false 00:09:37.112 }, 00:09:37.112 "memory_domains": [ 00:09:37.112 { 00:09:37.112 "dma_device_id": "system", 00:09:37.112 "dma_device_type": 1 00:09:37.112 }, 00:09:37.112 { 00:09:37.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.112 "dma_device_type": 2 00:09:37.112 } 00:09:37.112 ], 00:09:37.112 "driver_specific": {} 00:09:37.112 } 00:09:37.112 ] 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.112 09:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.112 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.112 "name": "Existed_Raid", 00:09:37.112 "uuid": "75ba7983-07f1-4ffe-b875-2766e19a0328", 00:09:37.112 "strip_size_kb": 0, 00:09:37.112 "state": "configuring", 00:09:37.112 "raid_level": "raid1", 00:09:37.112 "superblock": true, 00:09:37.112 "num_base_bdevs": 3, 00:09:37.112 "num_base_bdevs_discovered": 1, 00:09:37.112 "num_base_bdevs_operational": 3, 00:09:37.112 "base_bdevs_list": [ 00:09:37.112 { 00:09:37.112 "name": "BaseBdev1", 00:09:37.112 "uuid": "039776e3-267c-41b6-bd8d-895fa125d4bb", 00:09:37.112 "is_configured": true, 00:09:37.112 "data_offset": 2048, 00:09:37.112 "data_size": 63488 
00:09:37.112 }, 00:09:37.112 { 00:09:37.112 "name": "BaseBdev2", 00:09:37.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.112 "is_configured": false, 00:09:37.112 "data_offset": 0, 00:09:37.112 "data_size": 0 00:09:37.112 }, 00:09:37.112 { 00:09:37.112 "name": "BaseBdev3", 00:09:37.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.112 "is_configured": false, 00:09:37.112 "data_offset": 0, 00:09:37.112 "data_size": 0 00:09:37.112 } 00:09:37.112 ] 00:09:37.112 }' 00:09:37.112 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.112 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.372 [2024-12-12 09:23:11.385519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.372 [2024-12-12 09:23:11.385572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.372 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.631 [2024-12-12 09:23:11.397543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.631 [2024-12-12 09:23:11.399663] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.631 [2024-12-12 09:23:11.399716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.631 [2024-12-12 09:23:11.399727] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.631 [2024-12-12 09:23:11.399736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.631 "name": "Existed_Raid", 00:09:37.631 "uuid": "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7", 00:09:37.631 "strip_size_kb": 0, 00:09:37.631 "state": "configuring", 00:09:37.631 "raid_level": "raid1", 00:09:37.631 "superblock": true, 00:09:37.631 "num_base_bdevs": 3, 00:09:37.631 "num_base_bdevs_discovered": 1, 00:09:37.631 "num_base_bdevs_operational": 3, 00:09:37.631 "base_bdevs_list": [ 00:09:37.631 { 00:09:37.631 "name": "BaseBdev1", 00:09:37.631 "uuid": "039776e3-267c-41b6-bd8d-895fa125d4bb", 00:09:37.631 "is_configured": true, 00:09:37.631 "data_offset": 2048, 00:09:37.631 "data_size": 63488 00:09:37.631 }, 00:09:37.631 { 00:09:37.631 "name": "BaseBdev2", 00:09:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.631 "is_configured": false, 00:09:37.631 "data_offset": 0, 00:09:37.631 "data_size": 0 00:09:37.631 }, 00:09:37.631 { 00:09:37.631 "name": "BaseBdev3", 00:09:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.631 "is_configured": false, 00:09:37.631 "data_offset": 0, 00:09:37.631 "data_size": 0 00:09:37.631 } 00:09:37.631 ] 00:09:37.631 }' 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.631 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.890 [2024-12-12 09:23:11.868394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.890 BaseBdev2 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.890 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.891 [ 00:09:37.891 { 00:09:37.891 "name": "BaseBdev2", 00:09:37.891 "aliases": [ 00:09:37.891 "8a7b8ebd-35d5-4ed5-9e6a-1917a6fbd813" 00:09:37.891 ], 00:09:37.891 "product_name": "Malloc disk", 00:09:37.891 "block_size": 512, 00:09:37.891 "num_blocks": 65536, 00:09:37.891 "uuid": "8a7b8ebd-35d5-4ed5-9e6a-1917a6fbd813", 00:09:37.891 "assigned_rate_limits": { 00:09:37.891 "rw_ios_per_sec": 0, 00:09:37.891 "rw_mbytes_per_sec": 0, 00:09:37.891 "r_mbytes_per_sec": 0, 00:09:37.891 "w_mbytes_per_sec": 0 00:09:37.891 }, 00:09:37.891 "claimed": true, 00:09:37.891 "claim_type": "exclusive_write", 00:09:37.891 "zoned": false, 00:09:37.891 "supported_io_types": { 00:09:37.891 "read": true, 00:09:37.891 "write": true, 00:09:37.891 "unmap": true, 00:09:37.891 "flush": true, 00:09:37.891 "reset": true, 00:09:37.891 "nvme_admin": false, 00:09:37.891 "nvme_io": false, 00:09:37.891 "nvme_io_md": false, 00:09:37.891 "write_zeroes": true, 00:09:37.891 "zcopy": true, 00:09:37.891 "get_zone_info": false, 00:09:37.891 "zone_management": false, 00:09:37.891 "zone_append": false, 00:09:37.891 "compare": false, 00:09:37.891 "compare_and_write": false, 00:09:37.891 "abort": true, 00:09:37.891 "seek_hole": false, 00:09:37.891 "seek_data": false, 00:09:37.891 "copy": true, 00:09:37.891 "nvme_iov_md": false 00:09:37.891 }, 00:09:37.891 "memory_domains": [ 00:09:37.891 { 00:09:37.891 "dma_device_id": "system", 00:09:37.891 "dma_device_type": 1 00:09:37.891 }, 00:09:37.891 { 00:09:37.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.891 "dma_device_type": 2 00:09:37.891 } 00:09:37.891 ], 00:09:37.891 "driver_specific": {} 00:09:37.891 } 00:09:37.891 ] 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.891 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.150 
09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.150 "name": "Existed_Raid", 00:09:38.150 "uuid": "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7", 00:09:38.150 "strip_size_kb": 0, 00:09:38.150 "state": "configuring", 00:09:38.150 "raid_level": "raid1", 00:09:38.150 "superblock": true, 00:09:38.150 "num_base_bdevs": 3, 00:09:38.150 "num_base_bdevs_discovered": 2, 00:09:38.150 "num_base_bdevs_operational": 3, 00:09:38.150 "base_bdevs_list": [ 00:09:38.150 { 00:09:38.150 "name": "BaseBdev1", 00:09:38.150 "uuid": "039776e3-267c-41b6-bd8d-895fa125d4bb", 00:09:38.150 "is_configured": true, 00:09:38.150 "data_offset": 2048, 00:09:38.150 "data_size": 63488 00:09:38.150 }, 00:09:38.150 { 00:09:38.150 "name": "BaseBdev2", 00:09:38.150 "uuid": "8a7b8ebd-35d5-4ed5-9e6a-1917a6fbd813", 00:09:38.150 "is_configured": true, 00:09:38.150 "data_offset": 2048, 00:09:38.150 "data_size": 63488 00:09:38.150 }, 00:09:38.150 { 00:09:38.150 "name": "BaseBdev3", 00:09:38.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.150 "is_configured": false, 00:09:38.150 "data_offset": 0, 00:09:38.150 "data_size": 0 00:09:38.150 } 00:09:38.150 ] 00:09:38.150 }' 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.150 09:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.410 [2024-12-12 09:23:12.371101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.410 [2024-12-12 09:23:12.371473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:38.410 [2024-12-12 09:23:12.371501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.410 [2024-12-12 09:23:12.371812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:38.410 [2024-12-12 09:23:12.372016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.410 [2024-12-12 09:23:12.372027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.410 BaseBdev3 00:09:38.410 [2024-12-12 09:23:12.372201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.410 09:23:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.410 [ 00:09:38.410 { 00:09:38.410 "name": "BaseBdev3", 00:09:38.410 "aliases": [ 00:09:38.410 "208ca2e8-f3c6-48e2-8834-16347e2ee6d9" 00:09:38.410 ], 00:09:38.410 "product_name": "Malloc disk", 00:09:38.410 "block_size": 512, 00:09:38.410 "num_blocks": 65536, 00:09:38.410 "uuid": "208ca2e8-f3c6-48e2-8834-16347e2ee6d9", 00:09:38.410 "assigned_rate_limits": { 00:09:38.410 "rw_ios_per_sec": 0, 00:09:38.410 "rw_mbytes_per_sec": 0, 00:09:38.410 "r_mbytes_per_sec": 0, 00:09:38.410 "w_mbytes_per_sec": 0 00:09:38.410 }, 00:09:38.410 "claimed": true, 00:09:38.410 "claim_type": "exclusive_write", 00:09:38.410 "zoned": false, 00:09:38.410 "supported_io_types": { 00:09:38.410 "read": true, 00:09:38.410 "write": true, 00:09:38.410 "unmap": true, 00:09:38.410 "flush": true, 00:09:38.410 "reset": true, 00:09:38.410 "nvme_admin": false, 00:09:38.410 "nvme_io": false, 00:09:38.410 "nvme_io_md": false, 00:09:38.410 "write_zeroes": true, 00:09:38.410 "zcopy": true, 00:09:38.410 "get_zone_info": false, 00:09:38.410 "zone_management": false, 00:09:38.410 "zone_append": false, 00:09:38.410 "compare": false, 00:09:38.410 "compare_and_write": false, 00:09:38.410 "abort": true, 00:09:38.410 "seek_hole": false, 00:09:38.410 "seek_data": false, 00:09:38.410 "copy": true, 00:09:38.410 "nvme_iov_md": false 00:09:38.410 }, 00:09:38.410 "memory_domains": [ 00:09:38.410 { 00:09:38.410 "dma_device_id": "system", 00:09:38.410 "dma_device_type": 1 00:09:38.410 }, 00:09:38.410 { 00:09:38.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.410 "dma_device_type": 2 00:09:38.410 } 00:09:38.410 ], 00:09:38.410 "driver_specific": {} 00:09:38.410 } 00:09:38.410 ] 
00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.410 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.411 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.411 
09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.670 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.670 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.670 "name": "Existed_Raid", 00:09:38.670 "uuid": "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7", 00:09:38.670 "strip_size_kb": 0, 00:09:38.670 "state": "online", 00:09:38.670 "raid_level": "raid1", 00:09:38.670 "superblock": true, 00:09:38.670 "num_base_bdevs": 3, 00:09:38.670 "num_base_bdevs_discovered": 3, 00:09:38.670 "num_base_bdevs_operational": 3, 00:09:38.670 "base_bdevs_list": [ 00:09:38.670 { 00:09:38.670 "name": "BaseBdev1", 00:09:38.670 "uuid": "039776e3-267c-41b6-bd8d-895fa125d4bb", 00:09:38.670 "is_configured": true, 00:09:38.670 "data_offset": 2048, 00:09:38.670 "data_size": 63488 00:09:38.670 }, 00:09:38.670 { 00:09:38.670 "name": "BaseBdev2", 00:09:38.670 "uuid": "8a7b8ebd-35d5-4ed5-9e6a-1917a6fbd813", 00:09:38.670 "is_configured": true, 00:09:38.670 "data_offset": 2048, 00:09:38.670 "data_size": 63488 00:09:38.670 }, 00:09:38.670 { 00:09:38.670 "name": "BaseBdev3", 00:09:38.670 "uuid": "208ca2e8-f3c6-48e2-8834-16347e2ee6d9", 00:09:38.670 "is_configured": true, 00:09:38.670 "data_offset": 2048, 00:09:38.670 "data_size": 63488 00:09:38.670 } 00:09:38.670 ] 00:09:38.670 }' 00:09:38.670 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.670 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.930 [2024-12-12 09:23:12.830618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.930 "name": "Existed_Raid", 00:09:38.930 "aliases": [ 00:09:38.930 "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7" 00:09:38.930 ], 00:09:38.930 "product_name": "Raid Volume", 00:09:38.930 "block_size": 512, 00:09:38.930 "num_blocks": 63488, 00:09:38.930 "uuid": "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7", 00:09:38.930 "assigned_rate_limits": { 00:09:38.930 "rw_ios_per_sec": 0, 00:09:38.930 "rw_mbytes_per_sec": 0, 00:09:38.930 "r_mbytes_per_sec": 0, 00:09:38.930 "w_mbytes_per_sec": 0 00:09:38.930 }, 00:09:38.930 "claimed": false, 00:09:38.930 "zoned": false, 00:09:38.930 "supported_io_types": { 00:09:38.930 "read": true, 00:09:38.930 "write": true, 00:09:38.930 "unmap": false, 00:09:38.930 "flush": false, 00:09:38.930 "reset": true, 00:09:38.930 "nvme_admin": false, 00:09:38.930 "nvme_io": false, 00:09:38.930 "nvme_io_md": false, 00:09:38.930 "write_zeroes": true, 
00:09:38.930 "zcopy": false, 00:09:38.930 "get_zone_info": false, 00:09:38.930 "zone_management": false, 00:09:38.930 "zone_append": false, 00:09:38.930 "compare": false, 00:09:38.930 "compare_and_write": false, 00:09:38.930 "abort": false, 00:09:38.930 "seek_hole": false, 00:09:38.930 "seek_data": false, 00:09:38.930 "copy": false, 00:09:38.930 "nvme_iov_md": false 00:09:38.930 }, 00:09:38.930 "memory_domains": [ 00:09:38.930 { 00:09:38.930 "dma_device_id": "system", 00:09:38.930 "dma_device_type": 1 00:09:38.930 }, 00:09:38.930 { 00:09:38.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.930 "dma_device_type": 2 00:09:38.930 }, 00:09:38.930 { 00:09:38.930 "dma_device_id": "system", 00:09:38.930 "dma_device_type": 1 00:09:38.930 }, 00:09:38.930 { 00:09:38.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.930 "dma_device_type": 2 00:09:38.930 }, 00:09:38.930 { 00:09:38.930 "dma_device_id": "system", 00:09:38.930 "dma_device_type": 1 00:09:38.930 }, 00:09:38.930 { 00:09:38.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.930 "dma_device_type": 2 00:09:38.930 } 00:09:38.930 ], 00:09:38.930 "driver_specific": { 00:09:38.930 "raid": { 00:09:38.930 "uuid": "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7", 00:09:38.930 "strip_size_kb": 0, 00:09:38.930 "state": "online", 00:09:38.930 "raid_level": "raid1", 00:09:38.930 "superblock": true, 00:09:38.930 "num_base_bdevs": 3, 00:09:38.930 "num_base_bdevs_discovered": 3, 00:09:38.930 "num_base_bdevs_operational": 3, 00:09:38.930 "base_bdevs_list": [ 00:09:38.930 { 00:09:38.930 "name": "BaseBdev1", 00:09:38.930 "uuid": "039776e3-267c-41b6-bd8d-895fa125d4bb", 00:09:38.930 "is_configured": true, 00:09:38.930 "data_offset": 2048, 00:09:38.930 "data_size": 63488 00:09:38.930 }, 00:09:38.930 { 00:09:38.930 "name": "BaseBdev2", 00:09:38.930 "uuid": "8a7b8ebd-35d5-4ed5-9e6a-1917a6fbd813", 00:09:38.930 "is_configured": true, 00:09:38.930 "data_offset": 2048, 00:09:38.930 "data_size": 63488 00:09:38.930 }, 00:09:38.930 { 
00:09:38.930 "name": "BaseBdev3",
00:09:38.930 "uuid": "208ca2e8-f3c6-48e2-8834-16347e2ee6d9",
00:09:38.930 "is_configured": true,
00:09:38.930 "data_offset": 2048,
00:09:38.930 "data_size": 63488
00:09:38.930 }
00:09:38.930 ]
00:09:38.930 }
00:09:38.930 }
00:09:38.930 }'
00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:38.930 BaseBdev2
00:09:38.930 BaseBdev3'
00:09:38.930 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:39.190 09:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:39.190 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.191 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.191 [2024-12-12 09:23:13.113901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.450 "name": "Existed_Raid",
00:09:39.450 "uuid": "9b7e738c-6e12-4f3a-a9a9-ccae8084bba7",
00:09:39.450 "strip_size_kb": 0,
00:09:39.450 "state": "online",
00:09:39.450 "raid_level": "raid1",
00:09:39.450 "superblock": true,
00:09:39.450 "num_base_bdevs": 3,
00:09:39.450 "num_base_bdevs_discovered": 2,
00:09:39.450 "num_base_bdevs_operational": 2,
00:09:39.450 "base_bdevs_list": [
00:09:39.450 {
00:09:39.450 "name": null,
00:09:39.450 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:39.450 "is_configured": false,
00:09:39.450 "data_offset": 0,
00:09:39.450 "data_size": 63488
00:09:39.450 },
00:09:39.450 {
00:09:39.450 "name": "BaseBdev2",
00:09:39.450 "uuid": "8a7b8ebd-35d5-4ed5-9e6a-1917a6fbd813",
00:09:39.450 "is_configured": true,
00:09:39.450 "data_offset": 2048,
00:09:39.450 "data_size": 63488
00:09:39.450 },
00:09:39.450 {
00:09:39.450 "name": "BaseBdev3",
00:09:39.450 "uuid": "208ca2e8-f3c6-48e2-8834-16347e2ee6d9",
00:09:39.450 "is_configured": true,
00:09:39.450 "data_offset": 2048,
00:09:39.450 "data_size": 63488
00:09:39.450 }
00:09:39.450 ]
00:09:39.450 }'
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.450 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.710 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.710 [2024-12-12 09:23:13.722777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.969 [2024-12-12 09:23:13.865631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:39.969 [2024-12-12 09:23:13.865761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:39.969 [2024-12-12 09:23:13.970072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:39.969 [2024-12-12 09:23:13.970131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:39.969 [2024-12-12 09:23:13.970146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.969 09:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.229 BaseBdev2
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.229 [
00:09:40.229 {
00:09:40.229 "name": "BaseBdev2",
00:09:40.229 "aliases": [
00:09:40.229 "fefc2858-e36f-46cf-9827-36a3da9fce5f"
00:09:40.229 ],
00:09:40.229 "product_name": "Malloc disk",
00:09:40.229 "block_size": 512,
00:09:40.229 "num_blocks": 65536,
00:09:40.229 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f",
00:09:40.229 "assigned_rate_limits": {
00:09:40.229 "rw_ios_per_sec": 0,
00:09:40.229 "rw_mbytes_per_sec": 0,
00:09:40.229 "r_mbytes_per_sec": 0,
00:09:40.229 "w_mbytes_per_sec": 0
00:09:40.229 },
00:09:40.229 "claimed": false,
00:09:40.229 "zoned": false,
00:09:40.229 "supported_io_types": {
00:09:40.229 "read": true,
00:09:40.229 "write": true,
00:09:40.229 "unmap": true,
00:09:40.229 "flush": true,
00:09:40.229 "reset": true,
00:09:40.229 "nvme_admin": false,
00:09:40.229 "nvme_io": false,
00:09:40.229 "nvme_io_md": false,
00:09:40.229 "write_zeroes": true,
00:09:40.229 "zcopy": true,
00:09:40.229 "get_zone_info": false,
00:09:40.229 "zone_management": false,
00:09:40.229 "zone_append": false,
00:09:40.229 "compare": false,
00:09:40.229 "compare_and_write": false,
00:09:40.229 "abort": true,
00:09:40.229 "seek_hole": false,
00:09:40.229 "seek_data": false,
00:09:40.229 "copy": true,
00:09:40.229 "nvme_iov_md": false
00:09:40.229 },
00:09:40.229 "memory_domains": [
00:09:40.229 {
00:09:40.229 "dma_device_id": "system",
00:09:40.229 "dma_device_type": 1
00:09:40.229 },
00:09:40.229 {
00:09:40.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:40.229 "dma_device_type": 2
00:09:40.229 }
00:09:40.229 ],
00:09:40.229 "driver_specific": {}
00:09:40.229 }
00:09:40.229 ]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.229 BaseBdev3
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.229 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.229 [
00:09:40.229 {
00:09:40.229 "name": "BaseBdev3",
00:09:40.229 "aliases": [
00:09:40.229 "69d36efb-9b0b-45ef-ae28-3115abe2d592"
00:09:40.229 ],
00:09:40.229 "product_name": "Malloc disk",
00:09:40.229 "block_size": 512,
00:09:40.229 "num_blocks": 65536,
00:09:40.229 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592",
00:09:40.229 "assigned_rate_limits": {
00:09:40.229 "rw_ios_per_sec": 0,
00:09:40.229 "rw_mbytes_per_sec": 0,
00:09:40.229 "r_mbytes_per_sec": 0,
00:09:40.229 "w_mbytes_per_sec": 0
00:09:40.229 },
00:09:40.229 "claimed": false,
00:09:40.229 "zoned": false,
00:09:40.229 "supported_io_types": {
00:09:40.230 "read": true,
00:09:40.230 "write": true,
00:09:40.230 "unmap": true,
00:09:40.230 "flush": true,
00:09:40.230 "reset": true,
00:09:40.230 "nvme_admin": false,
00:09:40.230 "nvme_io": false,
00:09:40.230 "nvme_io_md": false,
00:09:40.230 "write_zeroes": true,
00:09:40.230 "zcopy": true,
00:09:40.230 "get_zone_info": false,
00:09:40.230 "zone_management": false,
00:09:40.230 "zone_append": false,
00:09:40.230 "compare": false,
00:09:40.230 "compare_and_write": false,
00:09:40.230 "abort": true,
00:09:40.230 "seek_hole": false,
00:09:40.230 "seek_data": false,
00:09:40.230 "copy": true,
00:09:40.230 "nvme_iov_md": false
00:09:40.230 },
00:09:40.230 "memory_domains": [
00:09:40.230 {
00:09:40.230 "dma_device_id": "system",
00:09:40.230 "dma_device_type": 1
00:09:40.230 },
00:09:40.230 {
00:09:40.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:40.230 "dma_device_type": 2
00:09:40.230 }
00:09:40.230 ],
00:09:40.230 "driver_specific": {}
00:09:40.230 }
00:09:40.230 ]
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.230 [2024-12-12 09:23:14.182368] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:40.230 [2024-12-12 09:23:14.182456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:40.230 [2024-12-12 09:23:14.182500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:40.230 [2024-12-12 09:23:14.184507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:40.230 "name": "Existed_Raid",
00:09:40.230 "uuid": "8fccca7c-167b-4297-a132-2164188f8435",
00:09:40.230 "strip_size_kb": 0,
00:09:40.230 "state": "configuring",
00:09:40.230 "raid_level": "raid1",
00:09:40.230 "superblock": true,
00:09:40.230 "num_base_bdevs": 3,
00:09:40.230 "num_base_bdevs_discovered": 2,
00:09:40.230 "num_base_bdevs_operational": 3,
00:09:40.230 "base_bdevs_list": [
00:09:40.230 {
00:09:40.230 "name": "BaseBdev1",
00:09:40.230 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:40.230 "is_configured": false,
00:09:40.230 "data_offset": 0,
00:09:40.230 "data_size": 0
00:09:40.230 },
00:09:40.230 {
00:09:40.230 "name": "BaseBdev2",
00:09:40.230 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f",
00:09:40.230 "is_configured": true,
00:09:40.230 "data_offset": 2048,
00:09:40.230 "data_size": 63488
00:09:40.230 },
00:09:40.230 {
00:09:40.230 "name": "BaseBdev3",
00:09:40.230 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592",
00:09:40.230 "is_configured": true,
00:09:40.230 "data_offset": 2048,
00:09:40.230 "data_size": 63488
00:09:40.230 }
00:09:40.230 ]
00:09:40.230 }'
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:40.230 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.807 [2024-12-12 09:23:14.657562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:40.807 "name": "Existed_Raid",
00:09:40.807 "uuid": "8fccca7c-167b-4297-a132-2164188f8435",
00:09:40.807 "strip_size_kb": 0,
00:09:40.807 "state": "configuring",
00:09:40.807 "raid_level": "raid1",
00:09:40.807 "superblock": true,
00:09:40.807 "num_base_bdevs": 3,
00:09:40.807 "num_base_bdevs_discovered": 1,
00:09:40.807 "num_base_bdevs_operational": 3,
00:09:40.807 "base_bdevs_list": [
00:09:40.807 {
00:09:40.807 "name": "BaseBdev1",
00:09:40.807 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:40.807 "is_configured": false,
00:09:40.807 "data_offset": 0,
00:09:40.807 "data_size": 0
00:09:40.807 },
00:09:40.807 {
00:09:40.807 "name": null,
00:09:40.807 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f",
00:09:40.807 "is_configured": false,
00:09:40.807 "data_offset": 0,
00:09:40.807 "data_size": 63488
00:09:40.807 },
00:09:40.807 {
00:09:40.807 "name": "BaseBdev3",
00:09:40.807 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592",
00:09:40.807 "is_configured": true,
00:09:40.807 "data_offset": 2048,
00:09:40.807 "data_size": 63488
00:09:40.807 }
00:09:40.807 ]
00:09:40.807 }'
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:40.807 09:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.385 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.385 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:41.385 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.385 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.385 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.385 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.386 [2024-12-12 09:23:15.237631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:41.386 BaseBdev1
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.386 [
00:09:41.386 {
00:09:41.386 "name": "BaseBdev1",
00:09:41.386 "aliases": [
00:09:41.386 "4b79914d-282f-4568-b7a7-3917c882f765"
00:09:41.386 ],
00:09:41.386 "product_name": "Malloc disk",
00:09:41.386 "block_size": 512,
00:09:41.386 "num_blocks": 65536,
00:09:41.386 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765",
00:09:41.386 "assigned_rate_limits": {
00:09:41.386 "rw_ios_per_sec": 0,
00:09:41.386 "rw_mbytes_per_sec": 0,
00:09:41.386 "r_mbytes_per_sec": 0,
00:09:41.386 "w_mbytes_per_sec": 0
00:09:41.386 },
00:09:41.386 "claimed": true,
00:09:41.386 "claim_type": "exclusive_write",
00:09:41.386 "zoned": false,
00:09:41.386 "supported_io_types": {
00:09:41.386 "read": true,
00:09:41.386 "write": true,
00:09:41.386 "unmap": true,
00:09:41.386 "flush": true,
00:09:41.386 "reset": true,
00:09:41.386 "nvme_admin": false,
00:09:41.386 "nvme_io": false,
00:09:41.386 "nvme_io_md": false,
00:09:41.386 "write_zeroes": true,
00:09:41.386 "zcopy": true,
00:09:41.386 "get_zone_info": false,
00:09:41.386 "zone_management": false,
00:09:41.386 "zone_append": false,
00:09:41.386 "compare": false,
00:09:41.386 "compare_and_write": false,
00:09:41.386 "abort": true,
00:09:41.386 "seek_hole": false,
00:09:41.386 "seek_data": false,
00:09:41.386 "copy": true,
00:09:41.386 "nvme_iov_md": false
00:09:41.386 },
00:09:41.386 "memory_domains": [
00:09:41.386 {
00:09:41.386 "dma_device_id": "system",
00:09:41.386 "dma_device_type": 1
00:09:41.386 },
00:09:41.386 {
00:09:41.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:41.386 "dma_device_type": 2
00:09:41.386 }
00:09:41.386 ],
00:09:41.386 "driver_specific": {}
00:09:41.386 }
00:09:41.386 ]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.386 "name": "Existed_Raid",
00:09:41.386 "uuid": "8fccca7c-167b-4297-a132-2164188f8435",
00:09:41.386 "strip_size_kb": 0,
00:09:41.386 "state": "configuring",
00:09:41.386 "raid_level": "raid1",
00:09:41.386 "superblock": true,
00:09:41.386 "num_base_bdevs": 3,
00:09:41.386 "num_base_bdevs_discovered": 2,
00:09:41.386 "num_base_bdevs_operational": 3,
00:09:41.386 "base_bdevs_list": [
00:09:41.386 {
00:09:41.386 "name": "BaseBdev1",
00:09:41.386 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765",
00:09:41.386 "is_configured": true,
00:09:41.386 "data_offset": 2048,
00:09:41.386 "data_size": 63488
00:09:41.386 },
00:09:41.386 {
00:09:41.386 "name": null,
00:09:41.386 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f",
00:09:41.386 "is_configured": false,
00:09:41.386 "data_offset": 0,
00:09:41.386 "data_size": 63488
00:09:41.386 },
00:09:41.386 {
00:09:41.386 "name": "BaseBdev3",
00:09:41.386 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592",
00:09:41.386 "is_configured": true,
00:09:41.386 "data_offset": 2048,
00:09:41.386 "data_size": 63488
00:09:41.386 }
00:09:41.386 ]
00:09:41.386 }'
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.386 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:41.955 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.956 [2024-12-12 09:23:15.772751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.956 "name": "Existed_Raid",
00:09:41.956 "uuid": "8fccca7c-167b-4297-a132-2164188f8435",
00:09:41.956 "strip_size_kb": 0,
00:09:41.956 "state": "configuring",
00:09:41.956 "raid_level": "raid1",
00:09:41.956 "superblock": true,
00:09:41.956 "num_base_bdevs": 3,
00:09:41.956 "num_base_bdevs_discovered": 1,
00:09:41.956 "num_base_bdevs_operational": 3,
00:09:41.956 "base_bdevs_list": [
00:09:41.956 {
00:09:41.956 "name": "BaseBdev1",
00:09:41.956 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765",
00:09:41.956 "is_configured": true,
00:09:41.956 "data_offset": 2048,
00:09:41.956 "data_size": 63488
00:09:41.956 },
00:09:41.956 {
00:09:41.956 "name": null,
00:09:41.956 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f",
00:09:41.956 "is_configured": false,
00:09:41.956 "data_offset": 0,
00:09:41.956 "data_size": 63488
00:09:41.956 },
00:09:41.956 {
00:09:41.956 "name": null,
00:09:41.956 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592",
00:09:41.956 "is_configured": false,
00:09:41.956 "data_offset": 0,
00:09:41.956 "data_size": 63488
00:09:41.956 }
00:09:41.956 ]
00:09:41.956 }'
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.956 09:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.215 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.215 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.215 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.215 09:23:16
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.215 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.475 [2024-12-12 09:23:16.259948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.475 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.476 "name": "Existed_Raid", 00:09:42.476 "uuid": "8fccca7c-167b-4297-a132-2164188f8435", 00:09:42.476 "strip_size_kb": 0, 00:09:42.476 "state": "configuring", 00:09:42.476 "raid_level": "raid1", 00:09:42.476 "superblock": true, 00:09:42.476 "num_base_bdevs": 3, 00:09:42.476 "num_base_bdevs_discovered": 2, 00:09:42.476 "num_base_bdevs_operational": 3, 00:09:42.476 "base_bdevs_list": [ 00:09:42.476 { 00:09:42.476 "name": "BaseBdev1", 00:09:42.476 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765", 00:09:42.476 "is_configured": true, 00:09:42.476 "data_offset": 2048, 00:09:42.476 "data_size": 63488 00:09:42.476 }, 00:09:42.476 { 00:09:42.476 "name": null, 00:09:42.476 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f", 00:09:42.476 "is_configured": false, 00:09:42.476 "data_offset": 0, 00:09:42.476 "data_size": 63488 00:09:42.476 }, 00:09:42.476 { 00:09:42.476 "name": "BaseBdev3", 00:09:42.476 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592", 00:09:42.476 "is_configured": true, 00:09:42.476 "data_offset": 2048, 00:09:42.476 "data_size": 63488 00:09:42.476 } 00:09:42.476 ] 00:09:42.476 }' 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.476 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:42.735 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.736 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.736 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.736 [2024-12-12 09:23:16.675293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.994 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.994 "name": "Existed_Raid", 00:09:42.994 "uuid": "8fccca7c-167b-4297-a132-2164188f8435", 00:09:42.994 "strip_size_kb": 0, 00:09:42.994 "state": "configuring", 00:09:42.994 "raid_level": "raid1", 00:09:42.994 "superblock": true, 00:09:42.994 "num_base_bdevs": 3, 00:09:42.995 "num_base_bdevs_discovered": 1, 00:09:42.995 "num_base_bdevs_operational": 3, 00:09:42.995 "base_bdevs_list": [ 00:09:42.995 { 00:09:42.995 "name": null, 00:09:42.995 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765", 00:09:42.995 "is_configured": false, 00:09:42.995 "data_offset": 0, 00:09:42.995 "data_size": 63488 00:09:42.995 }, 00:09:42.995 { 00:09:42.995 "name": null, 00:09:42.995 "uuid": 
"fefc2858-e36f-46cf-9827-36a3da9fce5f", 00:09:42.995 "is_configured": false, 00:09:42.995 "data_offset": 0, 00:09:42.995 "data_size": 63488 00:09:42.995 }, 00:09:42.995 { 00:09:42.995 "name": "BaseBdev3", 00:09:42.995 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592", 00:09:42.995 "is_configured": true, 00:09:42.995 "data_offset": 2048, 00:09:42.995 "data_size": 63488 00:09:42.995 } 00:09:42.995 ] 00:09:42.995 }' 00:09:42.995 09:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.995 09:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.254 [2024-12-12 09:23:17.237471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.254 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.514 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.514 "name": "Existed_Raid", 00:09:43.514 "uuid": "8fccca7c-167b-4297-a132-2164188f8435", 00:09:43.514 "strip_size_kb": 0, 00:09:43.514 "state": "configuring", 00:09:43.514 
"raid_level": "raid1", 00:09:43.514 "superblock": true, 00:09:43.514 "num_base_bdevs": 3, 00:09:43.514 "num_base_bdevs_discovered": 2, 00:09:43.514 "num_base_bdevs_operational": 3, 00:09:43.514 "base_bdevs_list": [ 00:09:43.514 { 00:09:43.514 "name": null, 00:09:43.514 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765", 00:09:43.514 "is_configured": false, 00:09:43.514 "data_offset": 0, 00:09:43.514 "data_size": 63488 00:09:43.514 }, 00:09:43.514 { 00:09:43.514 "name": "BaseBdev2", 00:09:43.514 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f", 00:09:43.514 "is_configured": true, 00:09:43.514 "data_offset": 2048, 00:09:43.514 "data_size": 63488 00:09:43.514 }, 00:09:43.514 { 00:09:43.514 "name": "BaseBdev3", 00:09:43.514 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592", 00:09:43.514 "is_configured": true, 00:09:43.514 "data_offset": 2048, 00:09:43.514 "data_size": 63488 00:09:43.514 } 00:09:43.514 ] 00:09:43.514 }' 00:09:43.514 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.514 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.774 09:23:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b79914d-282f-4568-b7a7-3917c882f765 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.774 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.034 [2024-12-12 09:23:17.830616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.034 [2024-12-12 09:23:17.830924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:44.034 [2024-12-12 09:23:17.830983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.034 [2024-12-12 09:23:17.831309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:44.034 [2024-12-12 09:23:17.831508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.034 NewBaseBdev 00:09:44.034 [2024-12-12 09:23:17.831551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:44.034 [2024-12-12 09:23:17.831715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.034 
09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.034 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.034 [ 00:09:44.034 { 00:09:44.034 "name": "NewBaseBdev", 00:09:44.034 "aliases": [ 00:09:44.034 "4b79914d-282f-4568-b7a7-3917c882f765" 00:09:44.034 ], 00:09:44.034 "product_name": "Malloc disk", 00:09:44.034 "block_size": 512, 00:09:44.034 "num_blocks": 65536, 00:09:44.034 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765", 00:09:44.034 "assigned_rate_limits": { 00:09:44.034 "rw_ios_per_sec": 0, 00:09:44.034 "rw_mbytes_per_sec": 0, 00:09:44.034 "r_mbytes_per_sec": 0, 00:09:44.034 "w_mbytes_per_sec": 0 00:09:44.034 }, 00:09:44.034 "claimed": true, 00:09:44.034 "claim_type": "exclusive_write", 00:09:44.034 
"zoned": false, 00:09:44.034 "supported_io_types": { 00:09:44.034 "read": true, 00:09:44.034 "write": true, 00:09:44.034 "unmap": true, 00:09:44.034 "flush": true, 00:09:44.034 "reset": true, 00:09:44.034 "nvme_admin": false, 00:09:44.034 "nvme_io": false, 00:09:44.034 "nvme_io_md": false, 00:09:44.034 "write_zeroes": true, 00:09:44.034 "zcopy": true, 00:09:44.034 "get_zone_info": false, 00:09:44.034 "zone_management": false, 00:09:44.034 "zone_append": false, 00:09:44.034 "compare": false, 00:09:44.035 "compare_and_write": false, 00:09:44.035 "abort": true, 00:09:44.035 "seek_hole": false, 00:09:44.035 "seek_data": false, 00:09:44.035 "copy": true, 00:09:44.035 "nvme_iov_md": false 00:09:44.035 }, 00:09:44.035 "memory_domains": [ 00:09:44.035 { 00:09:44.035 "dma_device_id": "system", 00:09:44.035 "dma_device_type": 1 00:09:44.035 }, 00:09:44.035 { 00:09:44.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.035 "dma_device_type": 2 00:09:44.035 } 00:09:44.035 ], 00:09:44.035 "driver_specific": {} 00:09:44.035 } 00:09:44.035 ] 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.035 "name": "Existed_Raid", 00:09:44.035 "uuid": "8fccca7c-167b-4297-a132-2164188f8435", 00:09:44.035 "strip_size_kb": 0, 00:09:44.035 "state": "online", 00:09:44.035 "raid_level": "raid1", 00:09:44.035 "superblock": true, 00:09:44.035 "num_base_bdevs": 3, 00:09:44.035 "num_base_bdevs_discovered": 3, 00:09:44.035 "num_base_bdevs_operational": 3, 00:09:44.035 "base_bdevs_list": [ 00:09:44.035 { 00:09:44.035 "name": "NewBaseBdev", 00:09:44.035 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765", 00:09:44.035 "is_configured": true, 00:09:44.035 "data_offset": 2048, 00:09:44.035 "data_size": 63488 00:09:44.035 }, 00:09:44.035 { 00:09:44.035 "name": "BaseBdev2", 00:09:44.035 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f", 00:09:44.035 "is_configured": true, 00:09:44.035 "data_offset": 2048, 00:09:44.035 "data_size": 63488 00:09:44.035 }, 00:09:44.035 
{ 00:09:44.035 "name": "BaseBdev3", 00:09:44.035 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592", 00:09:44.035 "is_configured": true, 00:09:44.035 "data_offset": 2048, 00:09:44.035 "data_size": 63488 00:09:44.035 } 00:09:44.035 ] 00:09:44.035 }' 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.035 09:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.298 [2024-12-12 09:23:18.290105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.298 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.558 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.558 "name": "Existed_Raid", 00:09:44.558 
"aliases": [ 00:09:44.558 "8fccca7c-167b-4297-a132-2164188f8435" 00:09:44.558 ], 00:09:44.558 "product_name": "Raid Volume", 00:09:44.558 "block_size": 512, 00:09:44.558 "num_blocks": 63488, 00:09:44.558 "uuid": "8fccca7c-167b-4297-a132-2164188f8435", 00:09:44.558 "assigned_rate_limits": { 00:09:44.558 "rw_ios_per_sec": 0, 00:09:44.558 "rw_mbytes_per_sec": 0, 00:09:44.558 "r_mbytes_per_sec": 0, 00:09:44.558 "w_mbytes_per_sec": 0 00:09:44.558 }, 00:09:44.558 "claimed": false, 00:09:44.558 "zoned": false, 00:09:44.558 "supported_io_types": { 00:09:44.558 "read": true, 00:09:44.558 "write": true, 00:09:44.558 "unmap": false, 00:09:44.558 "flush": false, 00:09:44.558 "reset": true, 00:09:44.558 "nvme_admin": false, 00:09:44.558 "nvme_io": false, 00:09:44.558 "nvme_io_md": false, 00:09:44.558 "write_zeroes": true, 00:09:44.558 "zcopy": false, 00:09:44.558 "get_zone_info": false, 00:09:44.558 "zone_management": false, 00:09:44.558 "zone_append": false, 00:09:44.558 "compare": false, 00:09:44.558 "compare_and_write": false, 00:09:44.558 "abort": false, 00:09:44.558 "seek_hole": false, 00:09:44.558 "seek_data": false, 00:09:44.558 "copy": false, 00:09:44.558 "nvme_iov_md": false 00:09:44.558 }, 00:09:44.558 "memory_domains": [ 00:09:44.558 { 00:09:44.558 "dma_device_id": "system", 00:09:44.558 "dma_device_type": 1 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.558 "dma_device_type": 2 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "dma_device_id": "system", 00:09:44.558 "dma_device_type": 1 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.558 "dma_device_type": 2 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "dma_device_id": "system", 00:09:44.558 "dma_device_type": 1 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.558 "dma_device_type": 2 00:09:44.558 } 00:09:44.558 ], 00:09:44.558 "driver_specific": { 00:09:44.558 "raid": { 00:09:44.558 
"uuid": "8fccca7c-167b-4297-a132-2164188f8435", 00:09:44.558 "strip_size_kb": 0, 00:09:44.558 "state": "online", 00:09:44.558 "raid_level": "raid1", 00:09:44.558 "superblock": true, 00:09:44.558 "num_base_bdevs": 3, 00:09:44.558 "num_base_bdevs_discovered": 3, 00:09:44.558 "num_base_bdevs_operational": 3, 00:09:44.558 "base_bdevs_list": [ 00:09:44.558 { 00:09:44.558 "name": "NewBaseBdev", 00:09:44.558 "uuid": "4b79914d-282f-4568-b7a7-3917c882f765", 00:09:44.558 "is_configured": true, 00:09:44.558 "data_offset": 2048, 00:09:44.558 "data_size": 63488 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "name": "BaseBdev2", 00:09:44.558 "uuid": "fefc2858-e36f-46cf-9827-36a3da9fce5f", 00:09:44.558 "is_configured": true, 00:09:44.558 "data_offset": 2048, 00:09:44.558 "data_size": 63488 00:09:44.558 }, 00:09:44.558 { 00:09:44.558 "name": "BaseBdev3", 00:09:44.558 "uuid": "69d36efb-9b0b-45ef-ae28-3115abe2d592", 00:09:44.558 "is_configured": true, 00:09:44.558 "data_offset": 2048, 00:09:44.558 "data_size": 63488 00:09:44.558 } 00:09:44.558 ] 00:09:44.559 } 00:09:44.559 } 00:09:44.559 }' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:44.559 BaseBdev2 00:09:44.559 BaseBdev3' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:44.559 09:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.559 09:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.559 [2024-12-12 09:23:18.521470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.559 [2024-12-12 09:23:18.521561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.559 [2024-12-12 09:23:18.521671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.559 [2024-12-12 09:23:18.522045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.559 [2024-12-12 09:23:18.522059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69163 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 69163 ']' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69163 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69163 00:09:44.559 killing process with pid 69163 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69163' 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69163 00:09:44.559 [2024-12-12 09:23:18.571086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.559 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69163 00:09:45.128 [2024-12-12 09:23:18.894057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.065 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.065 00:09:46.065 real 0m10.587s 00:09:46.065 user 0m16.600s 00:09:46.065 sys 0m1.931s 00:09:46.065 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.065 ************************************ 00:09:46.065 END TEST raid_state_function_test_sb 00:09:46.065 ************************************ 00:09:46.065 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.324 09:23:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:46.324 09:23:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.324 09:23:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.324 09:23:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.324 ************************************ 00:09:46.324 START TEST raid_superblock_test 00:09:46.324 ************************************ 00:09:46.324 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69783 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69783 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69783 ']' 00:09:46.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.325 09:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.325 [2024-12-12 09:23:20.240716] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:09:46.325 [2024-12-12 09:23:20.240911] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69783 ] 00:09:46.584 [2024-12-12 09:23:20.416106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.584 [2024-12-12 09:23:20.549530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.843 [2024-12-12 09:23:20.772698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.843 [2024-12-12 09:23:20.772880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.102 
09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.102 malloc1 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.102 [2024-12-12 09:23:21.109711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.102 [2024-12-12 09:23:21.109774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.102 [2024-12-12 09:23:21.109797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.102 [2024-12-12 09:23:21.109808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.102 [2024-12-12 09:23:21.112304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.102 [2024-12-12 09:23:21.112341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.102 pt1 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.102 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 malloc2 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 [2024-12-12 09:23:21.170773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.362 [2024-12-12 09:23:21.170875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.362 [2024-12-12 09:23:21.170917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.362 [2024-12-12 09:23:21.170945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.362 [2024-12-12 09:23:21.173351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.362 [2024-12-12 09:23:21.173422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.362 
pt2 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 malloc3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 [2024-12-12 09:23:21.248666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.362 [2024-12-12 09:23:21.248785] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.362 [2024-12-12 09:23:21.248827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:47.362 [2024-12-12 09:23:21.248903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.362 [2024-12-12 09:23:21.251397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.362 [2024-12-12 09:23:21.251470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.362 pt3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 [2024-12-12 09:23:21.260694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.362 [2024-12-12 09:23:21.262893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.362 [2024-12-12 09:23:21.263030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.362 [2024-12-12 09:23:21.263223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:47.362 [2024-12-12 09:23:21.263276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.362 [2024-12-12 09:23:21.263540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.362 
[2024-12-12 09:23:21.263781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:47.362 [2024-12-12 09:23:21.263832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:47.362 [2024-12-12 09:23:21.264038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.362 "name": "raid_bdev1", 00:09:47.362 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:47.362 "strip_size_kb": 0, 00:09:47.362 "state": "online", 00:09:47.362 "raid_level": "raid1", 00:09:47.362 "superblock": true, 00:09:47.362 "num_base_bdevs": 3, 00:09:47.362 "num_base_bdevs_discovered": 3, 00:09:47.362 "num_base_bdevs_operational": 3, 00:09:47.362 "base_bdevs_list": [ 00:09:47.362 { 00:09:47.362 "name": "pt1", 00:09:47.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.362 "is_configured": true, 00:09:47.362 "data_offset": 2048, 00:09:47.362 "data_size": 63488 00:09:47.362 }, 00:09:47.362 { 00:09:47.362 "name": "pt2", 00:09:47.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.362 "is_configured": true, 00:09:47.362 "data_offset": 2048, 00:09:47.362 "data_size": 63488 00:09:47.362 }, 00:09:47.362 { 00:09:47.362 "name": "pt3", 00:09:47.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.362 "is_configured": true, 00:09:47.362 "data_offset": 2048, 00:09:47.362 "data_size": 63488 00:09:47.362 } 00:09:47.362 ] 00:09:47.362 }' 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.362 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.931 09:23:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.931 [2024-12-12 09:23:21.708249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.931 "name": "raid_bdev1", 00:09:47.931 "aliases": [ 00:09:47.931 "ff249b06-d189-4550-9c8c-8603e80820f5" 00:09:47.931 ], 00:09:47.931 "product_name": "Raid Volume", 00:09:47.931 "block_size": 512, 00:09:47.931 "num_blocks": 63488, 00:09:47.931 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:47.931 "assigned_rate_limits": { 00:09:47.931 "rw_ios_per_sec": 0, 00:09:47.931 "rw_mbytes_per_sec": 0, 00:09:47.931 "r_mbytes_per_sec": 0, 00:09:47.931 "w_mbytes_per_sec": 0 00:09:47.931 }, 00:09:47.931 "claimed": false, 00:09:47.931 "zoned": false, 00:09:47.931 "supported_io_types": { 00:09:47.931 "read": true, 00:09:47.931 "write": true, 00:09:47.931 "unmap": false, 00:09:47.931 "flush": false, 00:09:47.931 "reset": true, 00:09:47.931 "nvme_admin": false, 00:09:47.931 "nvme_io": false, 00:09:47.931 "nvme_io_md": false, 00:09:47.931 "write_zeroes": true, 00:09:47.931 "zcopy": false, 00:09:47.931 "get_zone_info": false, 00:09:47.931 "zone_management": false, 00:09:47.931 "zone_append": false, 00:09:47.931 "compare": false, 00:09:47.931 
"compare_and_write": false, 00:09:47.931 "abort": false, 00:09:47.931 "seek_hole": false, 00:09:47.931 "seek_data": false, 00:09:47.931 "copy": false, 00:09:47.931 "nvme_iov_md": false 00:09:47.931 }, 00:09:47.931 "memory_domains": [ 00:09:47.931 { 00:09:47.931 "dma_device_id": "system", 00:09:47.931 "dma_device_type": 1 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.931 "dma_device_type": 2 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "dma_device_id": "system", 00:09:47.931 "dma_device_type": 1 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.931 "dma_device_type": 2 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "dma_device_id": "system", 00:09:47.931 "dma_device_type": 1 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.931 "dma_device_type": 2 00:09:47.931 } 00:09:47.931 ], 00:09:47.931 "driver_specific": { 00:09:47.931 "raid": { 00:09:47.931 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:47.931 "strip_size_kb": 0, 00:09:47.931 "state": "online", 00:09:47.931 "raid_level": "raid1", 00:09:47.931 "superblock": true, 00:09:47.931 "num_base_bdevs": 3, 00:09:47.931 "num_base_bdevs_discovered": 3, 00:09:47.931 "num_base_bdevs_operational": 3, 00:09:47.931 "base_bdevs_list": [ 00:09:47.931 { 00:09:47.931 "name": "pt1", 00:09:47.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.931 "is_configured": true, 00:09:47.931 "data_offset": 2048, 00:09:47.931 "data_size": 63488 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "name": "pt2", 00:09:47.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.931 "is_configured": true, 00:09:47.931 "data_offset": 2048, 00:09:47.931 "data_size": 63488 00:09:47.931 }, 00:09:47.931 { 00:09:47.931 "name": "pt3", 00:09:47.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.931 "is_configured": true, 00:09:47.931 "data_offset": 2048, 00:09:47.931 "data_size": 63488 00:09:47.931 } 
00:09:47.931 ] 00:09:47.931 } 00:09:47.931 } 00:09:47.931 }' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.931 pt2 00:09:47.931 pt3' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.931 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.931 09:23:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.932 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.191 09:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.191 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.191 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.191 09:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 [2024-12-12 09:23:22.007665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ff249b06-d189-4550-9c8c-8603e80820f5 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ff249b06-d189-4550-9c8c-8603e80820f5 ']' 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 [2024-12-12 09:23:22.051310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.191 [2024-12-12 09:23:22.051343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.191 [2024-12-12 09:23:22.051435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.191 [2024-12-12 09:23:22.051521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.191 [2024-12-12 09:23:22.051531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.191 
09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.191 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.192 [2024-12-12 09:23:22.183145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.192 [2024-12-12 09:23:22.185299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.192 [2024-12-12 09:23:22.185365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:48.192 [2024-12-12 09:23:22.185423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.192 [2024-12-12 09:23:22.185476] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.192 [2024-12-12 09:23:22.185495] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.192 [2024-12-12 09:23:22.185512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.192 [2024-12-12 09:23:22.185523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:48.192 request: 00:09:48.192 { 00:09:48.192 "name": "raid_bdev1", 00:09:48.192 "raid_level": "raid1", 00:09:48.192 "base_bdevs": [ 00:09:48.192 "malloc1", 00:09:48.192 "malloc2", 00:09:48.192 "malloc3" 00:09:48.192 ], 00:09:48.192 "superblock": false, 00:09:48.192 "method": "bdev_raid_create", 00:09:48.192 "req_id": 1 00:09:48.192 } 00:09:48.192 Got JSON-RPC error response 00:09:48.192 response: 00:09:48.192 { 00:09:48.192 "code": -17, 00:09:48.192 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.192 } 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.192 09:23:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.192 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.452 [2024-12-12 09:23:22.239033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.452 [2024-12-12 09:23:22.239080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.452 [2024-12-12 09:23:22.239098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:48.452 [2024-12-12 09:23:22.239108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.452 [2024-12-12 09:23:22.241740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.452 [2024-12-12 09:23:22.241776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.452 [2024-12-12 09:23:22.241868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.452 [2024-12-12 09:23:22.241931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.452 pt1 00:09:48.452 09:23:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.452 "name": "raid_bdev1", 00:09:48.452 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:48.452 "strip_size_kb": 0, 00:09:48.452 "state": 
"configuring", 00:09:48.452 "raid_level": "raid1", 00:09:48.452 "superblock": true, 00:09:48.452 "num_base_bdevs": 3, 00:09:48.452 "num_base_bdevs_discovered": 1, 00:09:48.452 "num_base_bdevs_operational": 3, 00:09:48.452 "base_bdevs_list": [ 00:09:48.452 { 00:09:48.452 "name": "pt1", 00:09:48.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.452 "is_configured": true, 00:09:48.452 "data_offset": 2048, 00:09:48.452 "data_size": 63488 00:09:48.452 }, 00:09:48.452 { 00:09:48.452 "name": null, 00:09:48.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.452 "is_configured": false, 00:09:48.452 "data_offset": 2048, 00:09:48.452 "data_size": 63488 00:09:48.452 }, 00:09:48.452 { 00:09:48.452 "name": null, 00:09:48.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.452 "is_configured": false, 00:09:48.452 "data_offset": 2048, 00:09:48.452 "data_size": 63488 00:09:48.452 } 00:09:48.452 ] 00:09:48.452 }' 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.452 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 [2024-12-12 09:23:22.670350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.712 [2024-12-12 09:23:22.670435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.712 [2024-12-12 09:23:22.670463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:48.712 
[2024-12-12 09:23:22.670472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.712 [2024-12-12 09:23:22.670999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.712 [2024-12-12 09:23:22.671019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.712 [2024-12-12 09:23:22.671125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.712 [2024-12-12 09:23:22.671156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.712 pt2 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 [2024-12-12 09:23:22.678322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.712 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.971 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.971 "name": "raid_bdev1", 00:09:48.971 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:48.971 "strip_size_kb": 0, 00:09:48.971 "state": "configuring", 00:09:48.971 "raid_level": "raid1", 00:09:48.971 "superblock": true, 00:09:48.971 "num_base_bdevs": 3, 00:09:48.971 "num_base_bdevs_discovered": 1, 00:09:48.971 "num_base_bdevs_operational": 3, 00:09:48.971 "base_bdevs_list": [ 00:09:48.971 { 00:09:48.971 "name": "pt1", 00:09:48.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.971 "is_configured": true, 00:09:48.971 "data_offset": 2048, 00:09:48.971 "data_size": 63488 00:09:48.971 }, 00:09:48.971 { 00:09:48.971 "name": null, 00:09:48.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.971 "is_configured": false, 00:09:48.971 "data_offset": 0, 00:09:48.971 "data_size": 63488 00:09:48.971 }, 00:09:48.971 { 00:09:48.971 "name": null, 00:09:48.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.972 "is_configured": false, 00:09:48.972 
"data_offset": 2048, 00:09:48.972 "data_size": 63488 00:09:48.972 } 00:09:48.972 ] 00:09:48.972 }' 00:09:48.972 09:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.972 09:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.231 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.232 [2024-12-12 09:23:23.121546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.232 [2024-12-12 09:23:23.121635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.232 [2024-12-12 09:23:23.121657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:49.232 [2024-12-12 09:23:23.121672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.232 [2024-12-12 09:23:23.122245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.232 [2024-12-12 09:23:23.122277] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.232 [2024-12-12 09:23:23.122374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.232 [2024-12-12 09:23:23.122413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.232 pt2 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.232 09:23:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.232 [2024-12-12 09:23:23.133490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.232 [2024-12-12 09:23:23.133539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.232 [2024-12-12 09:23:23.133553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:49.232 [2024-12-12 09:23:23.133563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.232 [2024-12-12 09:23:23.133938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.232 [2024-12-12 09:23:23.133991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.232 [2024-12-12 09:23:23.134073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.232 [2024-12-12 09:23:23.134096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.232 [2024-12-12 09:23:23.134239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.232 [2024-12-12 09:23:23.134261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.232 [2024-12-12 09:23:23.134524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.232 [2024-12-12 09:23:23.134694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:49.232 [2024-12-12 09:23:23.134710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:49.232 [2024-12-12 09:23:23.134875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.232 pt3 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.232 09:23:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.232 "name": "raid_bdev1", 00:09:49.232 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:49.232 "strip_size_kb": 0, 00:09:49.232 "state": "online", 00:09:49.232 "raid_level": "raid1", 00:09:49.232 "superblock": true, 00:09:49.232 "num_base_bdevs": 3, 00:09:49.232 "num_base_bdevs_discovered": 3, 00:09:49.232 "num_base_bdevs_operational": 3, 00:09:49.232 "base_bdevs_list": [ 00:09:49.232 { 00:09:49.232 "name": "pt1", 00:09:49.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.232 "is_configured": true, 00:09:49.232 "data_offset": 2048, 00:09:49.232 "data_size": 63488 00:09:49.232 }, 00:09:49.232 { 00:09:49.232 "name": "pt2", 00:09:49.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.232 "is_configured": true, 00:09:49.232 "data_offset": 2048, 00:09:49.232 "data_size": 63488 00:09:49.232 }, 00:09:49.232 { 00:09:49.232 "name": "pt3", 00:09:49.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.232 "is_configured": true, 00:09:49.232 "data_offset": 2048, 00:09:49.232 "data_size": 63488 00:09:49.232 } 00:09:49.232 ] 00:09:49.232 }' 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.232 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.801 [2024-12-12 09:23:23.549145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.801 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.801 "name": "raid_bdev1", 00:09:49.802 "aliases": [ 00:09:49.802 "ff249b06-d189-4550-9c8c-8603e80820f5" 00:09:49.802 ], 00:09:49.802 "product_name": "Raid Volume", 00:09:49.802 "block_size": 512, 00:09:49.802 "num_blocks": 63488, 00:09:49.802 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:49.802 "assigned_rate_limits": { 00:09:49.802 "rw_ios_per_sec": 0, 00:09:49.802 "rw_mbytes_per_sec": 0, 00:09:49.802 "r_mbytes_per_sec": 0, 00:09:49.802 "w_mbytes_per_sec": 0 00:09:49.802 }, 00:09:49.802 "claimed": false, 00:09:49.802 "zoned": false, 00:09:49.802 "supported_io_types": { 00:09:49.802 "read": true, 00:09:49.802 "write": true, 00:09:49.802 "unmap": false, 00:09:49.802 "flush": false, 00:09:49.802 "reset": true, 00:09:49.802 "nvme_admin": false, 00:09:49.802 "nvme_io": false, 00:09:49.802 "nvme_io_md": false, 00:09:49.802 "write_zeroes": true, 00:09:49.802 "zcopy": false, 00:09:49.802 "get_zone_info": 
false, 00:09:49.802 "zone_management": false, 00:09:49.802 "zone_append": false, 00:09:49.802 "compare": false, 00:09:49.802 "compare_and_write": false, 00:09:49.802 "abort": false, 00:09:49.802 "seek_hole": false, 00:09:49.802 "seek_data": false, 00:09:49.802 "copy": false, 00:09:49.802 "nvme_iov_md": false 00:09:49.802 }, 00:09:49.802 "memory_domains": [ 00:09:49.802 { 00:09:49.802 "dma_device_id": "system", 00:09:49.802 "dma_device_type": 1 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.802 "dma_device_type": 2 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "dma_device_id": "system", 00:09:49.802 "dma_device_type": 1 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.802 "dma_device_type": 2 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "dma_device_id": "system", 00:09:49.802 "dma_device_type": 1 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.802 "dma_device_type": 2 00:09:49.802 } 00:09:49.802 ], 00:09:49.802 "driver_specific": { 00:09:49.802 "raid": { 00:09:49.802 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:49.802 "strip_size_kb": 0, 00:09:49.802 "state": "online", 00:09:49.802 "raid_level": "raid1", 00:09:49.802 "superblock": true, 00:09:49.802 "num_base_bdevs": 3, 00:09:49.802 "num_base_bdevs_discovered": 3, 00:09:49.802 "num_base_bdevs_operational": 3, 00:09:49.802 "base_bdevs_list": [ 00:09:49.802 { 00:09:49.802 "name": "pt1", 00:09:49.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.802 "is_configured": true, 00:09:49.802 "data_offset": 2048, 00:09:49.802 "data_size": 63488 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "name": "pt2", 00:09:49.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.802 "is_configured": true, 00:09:49.802 "data_offset": 2048, 00:09:49.802 "data_size": 63488 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "name": "pt3", 00:09:49.802 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:49.802 "is_configured": true, 00:09:49.802 "data_offset": 2048, 00:09:49.802 "data_size": 63488 00:09:49.802 } 00:09:49.802 ] 00:09:49.802 } 00:09:49.802 } 00:09:49.802 }' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.802 pt2 00:09:49.802 pt3' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.802 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.062 [2024-12-12 09:23:23.824561] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ff249b06-d189-4550-9c8c-8603e80820f5 '!=' ff249b06-d189-4550-9c8c-8603e80820f5 ']' 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.062 [2024-12-12 09:23:23.872264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.062 09:23:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.062 "name": "raid_bdev1", 00:09:50.062 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:50.062 "strip_size_kb": 0, 00:09:50.062 "state": "online", 00:09:50.062 "raid_level": "raid1", 00:09:50.062 "superblock": true, 00:09:50.062 "num_base_bdevs": 3, 00:09:50.062 "num_base_bdevs_discovered": 2, 00:09:50.062 "num_base_bdevs_operational": 2, 00:09:50.062 "base_bdevs_list": [ 00:09:50.062 { 00:09:50.062 "name": null, 00:09:50.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.062 "is_configured": false, 00:09:50.062 "data_offset": 0, 00:09:50.062 "data_size": 63488 00:09:50.062 }, 00:09:50.062 { 00:09:50.062 "name": "pt2", 00:09:50.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.062 "is_configured": true, 00:09:50.062 "data_offset": 2048, 00:09:50.062 "data_size": 63488 00:09:50.062 }, 00:09:50.062 { 00:09:50.062 "name": "pt3", 00:09:50.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.062 "is_configured": true, 00:09:50.062 "data_offset": 2048, 00:09:50.062 "data_size": 63488 00:09:50.062 } 
00:09:50.062 ] 00:09:50.062 }' 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.062 09:23:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 [2024-12-12 09:23:24.351454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.630 [2024-12-12 09:23:24.351518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.630 [2024-12-12 09:23:24.351618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.630 [2024-12-12 09:23:24.351719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.630 [2024-12-12 09:23:24.351807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.630 09:23:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 [2024-12-12 09:23:24.439287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.630 [2024-12-12 09:23:24.439339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.630 [2024-12-12 09:23:24.439355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:50.630 [2024-12-12 09:23:24.439368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.630 [2024-12-12 09:23:24.441849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.630 [2024-12-12 09:23:24.441890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.630 [2024-12-12 09:23:24.441981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.630 [2024-12-12 09:23:24.442045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.630 pt2 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.630 09:23:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.630 "name": "raid_bdev1", 00:09:50.630 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:50.630 "strip_size_kb": 0, 00:09:50.630 "state": "configuring", 00:09:50.630 "raid_level": "raid1", 00:09:50.630 "superblock": true, 00:09:50.630 "num_base_bdevs": 3, 00:09:50.630 "num_base_bdevs_discovered": 1, 00:09:50.630 "num_base_bdevs_operational": 2, 00:09:50.630 "base_bdevs_list": [ 00:09:50.630 { 00:09:50.630 "name": null, 00:09:50.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.630 "is_configured": false, 00:09:50.630 "data_offset": 2048, 00:09:50.630 "data_size": 63488 00:09:50.630 }, 00:09:50.630 { 00:09:50.630 "name": "pt2", 00:09:50.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.630 "is_configured": true, 00:09:50.630 "data_offset": 2048, 00:09:50.630 "data_size": 63488 00:09:50.630 }, 00:09:50.630 { 00:09:50.630 "name": null, 00:09:50.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.630 "is_configured": false, 00:09:50.630 "data_offset": 2048, 00:09:50.630 "data_size": 63488 00:09:50.630 } 
00:09:50.630 ] 00:09:50.630 }' 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.630 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.896 [2024-12-12 09:23:24.902537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.896 [2024-12-12 09:23:24.902656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.896 [2024-12-12 09:23:24.902692] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:50.896 [2024-12-12 09:23:24.902722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.896 [2024-12-12 09:23:24.903240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.896 [2024-12-12 09:23:24.903302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.896 [2024-12-12 09:23:24.903429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:50.896 [2024-12-12 09:23:24.903487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.896 [2024-12-12 09:23:24.903637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:50.896 [2024-12-12 09:23:24.903686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.896 [2024-12-12 09:23:24.904008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:50.896 [2024-12-12 09:23:24.904214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.896 [2024-12-12 09:23:24.904255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.896 [2024-12-12 09:23:24.904456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.896 pt3 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.896 
09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.896 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.167 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.167 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.167 "name": "raid_bdev1", 00:09:51.167 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:51.167 "strip_size_kb": 0, 00:09:51.167 "state": "online", 00:09:51.167 "raid_level": "raid1", 00:09:51.167 "superblock": true, 00:09:51.167 "num_base_bdevs": 3, 00:09:51.167 "num_base_bdevs_discovered": 2, 00:09:51.167 "num_base_bdevs_operational": 2, 00:09:51.167 "base_bdevs_list": [ 00:09:51.167 { 00:09:51.167 "name": null, 00:09:51.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.167 "is_configured": false, 00:09:51.167 "data_offset": 2048, 00:09:51.167 "data_size": 63488 00:09:51.167 }, 00:09:51.167 { 00:09:51.167 "name": "pt2", 00:09:51.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.167 "is_configured": true, 00:09:51.167 "data_offset": 2048, 00:09:51.167 "data_size": 63488 00:09:51.167 }, 00:09:51.167 { 00:09:51.167 "name": "pt3", 00:09:51.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.167 "is_configured": true, 00:09:51.167 "data_offset": 2048, 00:09:51.167 "data_size": 63488 00:09:51.167 } 00:09:51.167 ] 00:09:51.167 }' 00:09:51.167 09:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.167 09:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 [2024-12-12 09:23:25.301838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.427 [2024-12-12 09:23:25.301865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.427 [2024-12-12 09:23:25.301930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.427 [2024-12-12 09:23:25.301996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.427 [2024-12-12 09:23:25.302005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 [2024-12-12 09:23:25.357770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.427 [2024-12-12 09:23:25.357817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.427 [2024-12-12 09:23:25.357835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:51.427 [2024-12-12 09:23:25.357843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.427 [2024-12-12 09:23:25.360303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.427 [2024-12-12 09:23:25.360335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.427 [2024-12-12 09:23:25.360406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.427 [2024-12-12 09:23:25.360450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.427 [2024-12-12 09:23:25.360567] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:51.427 [2024-12-12 09:23:25.360577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.427 [2024-12-12 09:23:25.360592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:51.427 [2024-12-12 09:23:25.360649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.427 pt1 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.427 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.427 "name": "raid_bdev1", 00:09:51.427 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:51.427 "strip_size_kb": 0, 00:09:51.427 "state": "configuring", 00:09:51.427 "raid_level": "raid1", 00:09:51.427 "superblock": true, 00:09:51.427 "num_base_bdevs": 3, 00:09:51.427 "num_base_bdevs_discovered": 1, 00:09:51.427 "num_base_bdevs_operational": 2, 00:09:51.427 "base_bdevs_list": [ 00:09:51.427 { 00:09:51.427 "name": null, 00:09:51.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.427 "is_configured": false, 00:09:51.427 "data_offset": 2048, 00:09:51.427 "data_size": 63488 00:09:51.427 }, 00:09:51.427 { 00:09:51.427 "name": "pt2", 00:09:51.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.427 "is_configured": true, 00:09:51.427 "data_offset": 2048, 00:09:51.427 "data_size": 63488 00:09:51.427 }, 00:09:51.427 { 00:09:51.427 "name": null, 00:09:51.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.427 "is_configured": false, 00:09:51.427 "data_offset": 2048, 00:09:51.427 "data_size": 63488 00:09:51.427 } 00:09:51.427 ] 00:09:51.427 }' 00:09:51.428 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.428 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.996 [2024-12-12 09:23:25.836989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:51.996 [2024-12-12 09:23:25.837044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.996 [2024-12-12 09:23:25.837068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:51.996 [2024-12-12 09:23:25.837077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.996 [2024-12-12 09:23:25.837570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.996 [2024-12-12 09:23:25.837591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:51.996 [2024-12-12 09:23:25.837670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:51.996 [2024-12-12 09:23:25.837691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:51.996 [2024-12-12 09:23:25.837826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:51.996 [2024-12-12 09:23:25.837834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.996 [2024-12-12 09:23:25.838105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:51.996 [2024-12-12 09:23:25.838271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:51.996 [2024-12-12 09:23:25.838286] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:51.996 [2024-12-12 09:23:25.838430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.996 pt3 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.996 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.997 "name": "raid_bdev1", 00:09:51.997 "uuid": "ff249b06-d189-4550-9c8c-8603e80820f5", 00:09:51.997 "strip_size_kb": 0, 00:09:51.997 "state": "online", 00:09:51.997 "raid_level": "raid1", 00:09:51.997 "superblock": true, 00:09:51.997 "num_base_bdevs": 3, 00:09:51.997 "num_base_bdevs_discovered": 2, 00:09:51.997 "num_base_bdevs_operational": 2, 00:09:51.997 "base_bdevs_list": [ 00:09:51.997 { 00:09:51.997 "name": null, 00:09:51.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.997 "is_configured": false, 00:09:51.997 "data_offset": 2048, 00:09:51.997 "data_size": 63488 00:09:51.997 }, 00:09:51.997 { 00:09:51.997 "name": "pt2", 00:09:51.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.997 "is_configured": true, 00:09:51.997 "data_offset": 2048, 00:09:51.997 "data_size": 63488 00:09:51.997 }, 00:09:51.997 { 00:09:51.997 "name": "pt3", 00:09:51.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.997 "is_configured": true, 00:09:51.997 "data_offset": 2048, 00:09:51.997 "data_size": 63488 00:09:51.997 } 00:09:51.997 ] 00:09:51.997 }' 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.997 09:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:52.564 [2024-12-12 09:23:26.336360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ff249b06-d189-4550-9c8c-8603e80820f5 '!=' ff249b06-d189-4550-9c8c-8603e80820f5 ']' 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69783 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69783 ']' 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69783 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69783 00:09:52.564 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.565 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.565 killing process with pid 69783 00:09:52.565 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69783' 00:09:52.565 09:23:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 69783 00:09:52.565 [2024-12-12 09:23:26.405694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.565 [2024-12-12 09:23:26.405776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.565 [2024-12-12 09:23:26.405832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.565 [2024-12-12 09:23:26.405848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:52.565 09:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69783 00:09:52.824 [2024-12-12 09:23:26.730175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.204 09:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:54.204 00:09:54.204 real 0m7.777s 00:09:54.204 user 0m11.992s 00:09:54.204 sys 0m1.433s 00:09:54.204 09:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.204 09:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.204 ************************************ 00:09:54.204 END TEST raid_superblock_test 00:09:54.204 ************************************ 00:09:54.204 09:23:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:54.204 09:23:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.204 09:23:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.204 09:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.204 ************************************ 00:09:54.204 START TEST raid_read_error_test 00:09:54.204 ************************************ 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:54.204 09:23:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.204 09:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.204 09:23:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ug32oBQRMu 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70231 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70231 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70231 ']' 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.204 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.204 [2024-12-12 09:23:28.107206] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:09:54.204 [2024-12-12 09:23:28.107321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70231 ] 00:09:54.463 [2024-12-12 09:23:28.287043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.463 [2024-12-12 09:23:28.417789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.722 [2024-12-12 09:23:28.646927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.722 [2024-12-12 09:23:28.646995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.982 BaseBdev1_malloc 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.982 true 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.982 09:23:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.982 [2024-12-12 09:23:29.000289] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.982 [2024-12-12 09:23:29.000344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.982 [2024-12-12 09:23:29.000363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.982 [2024-12-12 09:23:29.000376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.982 [2024-12-12 09:23:29.002796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.982 [2024-12-12 09:23:29.002832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.982 BaseBdev1 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 BaseBdev2_malloc 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 true 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 [2024-12-12 09:23:29.073271] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.242 [2024-12-12 09:23:29.073317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.242 [2024-12-12 09:23:29.073332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.242 [2024-12-12 09:23:29.073342] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.242 [2024-12-12 09:23:29.075688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.242 [2024-12-12 09:23:29.075721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.242 BaseBdev2 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 BaseBdev3_malloc 00:09:55.242 09:23:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 true 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 [2024-12-12 09:23:29.154902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.242 [2024-12-12 09:23:29.154947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.242 [2024-12-12 09:23:29.154978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:55.242 [2024-12-12 09:23:29.154990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.242 [2024-12-12 09:23:29.157382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.242 [2024-12-12 09:23:29.157416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:55.242 BaseBdev3 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 [2024-12-12 09:23:29.166962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.242 [2024-12-12 09:23:29.169053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.242 [2024-12-12 09:23:29.169122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.242 [2024-12-12 09:23:29.169345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.242 [2024-12-12 09:23:29.169364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:55.242 [2024-12-12 09:23:29.169618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:55.242 [2024-12-12 09:23:29.169801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.242 [2024-12-12 09:23:29.169819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:55.242 [2024-12-12 09:23:29.169990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.242 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.243 09:23:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.243 "name": "raid_bdev1", 00:09:55.243 "uuid": "1b49c678-ccda-4165-bdeb-e3199e09c0c7", 00:09:55.243 "strip_size_kb": 0, 00:09:55.243 "state": "online", 00:09:55.243 "raid_level": "raid1", 00:09:55.243 "superblock": true, 00:09:55.243 "num_base_bdevs": 3, 00:09:55.243 "num_base_bdevs_discovered": 3, 00:09:55.243 "num_base_bdevs_operational": 3, 00:09:55.243 "base_bdevs_list": [ 00:09:55.243 { 00:09:55.243 "name": "BaseBdev1", 00:09:55.243 "uuid": "6c2c4f1a-aaea-59b9-b23c-64fd39c55e61", 00:09:55.243 "is_configured": true, 00:09:55.243 "data_offset": 2048, 00:09:55.243 "data_size": 63488 00:09:55.243 }, 00:09:55.243 { 00:09:55.243 "name": "BaseBdev2", 00:09:55.243 "uuid": "84846d41-709b-5cb0-9387-9cf96f50a706", 00:09:55.243 "is_configured": true, 00:09:55.243 "data_offset": 2048, 00:09:55.243 "data_size": 63488 
00:09:55.243 }, 00:09:55.243 { 00:09:55.243 "name": "BaseBdev3", 00:09:55.243 "uuid": "2e007ac1-7195-5c9c-a376-288a88e99681", 00:09:55.243 "is_configured": true, 00:09:55.243 "data_offset": 2048, 00:09:55.243 "data_size": 63488 00:09:55.243 } 00:09:55.243 ] 00:09:55.243 }' 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.243 09:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.811 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.811 09:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.811 [2024-12-12 09:23:29.683371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.750 
09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.750 "name": "raid_bdev1", 00:09:56.750 "uuid": "1b49c678-ccda-4165-bdeb-e3199e09c0c7", 00:09:56.750 "strip_size_kb": 0, 00:09:56.750 "state": "online", 00:09:56.750 "raid_level": "raid1", 00:09:56.750 "superblock": true, 00:09:56.750 "num_base_bdevs": 3, 00:09:56.750 "num_base_bdevs_discovered": 3, 00:09:56.750 "num_base_bdevs_operational": 3, 00:09:56.750 "base_bdevs_list": [ 00:09:56.750 { 00:09:56.750 "name": "BaseBdev1", 00:09:56.750 "uuid": "6c2c4f1a-aaea-59b9-b23c-64fd39c55e61", 
00:09:56.750 "is_configured": true, 00:09:56.750 "data_offset": 2048, 00:09:56.750 "data_size": 63488 00:09:56.750 }, 00:09:56.750 { 00:09:56.750 "name": "BaseBdev2", 00:09:56.750 "uuid": "84846d41-709b-5cb0-9387-9cf96f50a706", 00:09:56.750 "is_configured": true, 00:09:56.750 "data_offset": 2048, 00:09:56.750 "data_size": 63488 00:09:56.750 }, 00:09:56.750 { 00:09:56.750 "name": "BaseBdev3", 00:09:56.750 "uuid": "2e007ac1-7195-5c9c-a376-288a88e99681", 00:09:56.750 "is_configured": true, 00:09:56.750 "data_offset": 2048, 00:09:56.750 "data_size": 63488 00:09:56.750 } 00:09:56.750 ] 00:09:56.750 }' 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.750 09:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.319 [2024-12-12 09:23:31.097528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.319 [2024-12-12 09:23:31.097577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.319 [2024-12-12 09:23:31.100331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.319 [2024-12-12 09:23:31.100386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.319 [2024-12-12 09:23:31.100520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.319 [2024-12-12 09:23:31.100536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:57.319 { 00:09:57.319 "results": [ 00:09:57.319 { 00:09:57.319 "job": "raid_bdev1", 
00:09:57.319 "core_mask": "0x1", 00:09:57.319 "workload": "randrw", 00:09:57.319 "percentage": 50, 00:09:57.319 "status": "finished", 00:09:57.319 "queue_depth": 1, 00:09:57.319 "io_size": 131072, 00:09:57.319 "runtime": 1.415053, 00:09:57.319 "iops": 10275.940194466215, 00:09:57.319 "mibps": 1284.4925243082769, 00:09:57.319 "io_failed": 0, 00:09:57.319 "io_timeout": 0, 00:09:57.319 "avg_latency_us": 94.75227552630133, 00:09:57.319 "min_latency_us": 23.811353711790392, 00:09:57.319 "max_latency_us": 1509.6174672489083 00:09:57.319 } 00:09:57.319 ], 00:09:57.319 "core_count": 1 00:09:57.319 } 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70231 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70231 ']' 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70231 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70231 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.319 killing process with pid 70231 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70231' 00:09:57.319 09:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70231 00:09:57.319 [2024-12-12 09:23:31.146399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.319 09:23:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70231 00:09:57.579 [2024-12-12 09:23:31.393022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ug32oBQRMu 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:58.958 00:09:58.958 real 0m4.667s 00:09:58.958 user 0m5.426s 00:09:58.958 sys 0m0.665s 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.958 09:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 ************************************ 00:09:58.958 END TEST raid_read_error_test 00:09:58.958 ************************************ 00:09:58.958 09:23:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:58.958 09:23:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.958 09:23:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.958 09:23:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 ************************************ 00:09:58.958 START TEST raid_write_error_test 00:09:58.958 ************************************ 00:09:58.958 09:23:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Wmp6qTrJNI 00:09:58.958 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70377 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70377 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70377 ']' 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.959 09:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 [2024-12-12 09:23:32.839912] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:09:58.959 [2024-12-12 09:23:32.840036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70377 ] 00:09:59.219 [2024-12-12 09:23:33.013292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.219 [2024-12-12 09:23:33.145472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.478 [2024-12-12 09:23:33.373805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.478 [2024-12-12 09:23:33.373849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.738 BaseBdev1_malloc 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.738 true 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.738 [2024-12-12 09:23:33.719111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:59.738 [2024-12-12 09:23:33.719167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.738 [2024-12-12 09:23:33.719187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:59.738 [2024-12-12 09:23:33.719198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.738 [2024-12-12 09:23:33.721542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.738 [2024-12-12 09:23:33.721588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:59.738 BaseBdev1 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.738 09:23:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.998 BaseBdev2_malloc 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 true 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 [2024-12-12 09:23:33.790156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:59.999 [2024-12-12 09:23:33.790206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.999 [2024-12-12 09:23:33.790221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:59.999 [2024-12-12 09:23:33.790231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.999 [2024-12-12 09:23:33.792583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.999 [2024-12-12 09:23:33.792621] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:59.999 BaseBdev2 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.999 09:23:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 BaseBdev3_malloc 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 true 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 [2024-12-12 09:23:33.877626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:59.999 [2024-12-12 09:23:33.877675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.999 [2024-12-12 09:23:33.877691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:59.999 [2024-12-12 09:23:33.877701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.999 [2024-12-12 09:23:33.880101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.999 [2024-12-12 09:23:33.880194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:59.999 BaseBdev3 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 [2024-12-12 09:23:33.889678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.999 [2024-12-12 09:23:33.891782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.999 [2024-12-12 09:23:33.891867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.999 [2024-12-12 09:23:33.892111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:59.999 [2024-12-12 09:23:33.892125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.999 [2024-12-12 09:23:33.892368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:59.999 [2024-12-12 09:23:33.892546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:59.999 [2024-12-12 09:23:33.892559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:59.999 [2024-12-12 09:23:33.892709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.999 "name": "raid_bdev1", 00:09:59.999 "uuid": "ea3ed357-694f-4595-a39a-d814ed75bc1a", 00:09:59.999 "strip_size_kb": 0, 00:09:59.999 "state": "online", 00:09:59.999 "raid_level": "raid1", 00:09:59.999 "superblock": true, 00:09:59.999 "num_base_bdevs": 3, 00:09:59.999 "num_base_bdevs_discovered": 3, 00:09:59.999 "num_base_bdevs_operational": 3, 00:09:59.999 "base_bdevs_list": [ 00:09:59.999 { 00:09:59.999 "name": "BaseBdev1", 00:09:59.999 
"uuid": "35a518e2-99b1-5440-b6bc-1988ce4174f6", 00:09:59.999 "is_configured": true, 00:09:59.999 "data_offset": 2048, 00:09:59.999 "data_size": 63488 00:09:59.999 }, 00:09:59.999 { 00:09:59.999 "name": "BaseBdev2", 00:09:59.999 "uuid": "6f7b2b99-a651-5474-81b8-59367bc9bb67", 00:09:59.999 "is_configured": true, 00:09:59.999 "data_offset": 2048, 00:09:59.999 "data_size": 63488 00:09:59.999 }, 00:09:59.999 { 00:09:59.999 "name": "BaseBdev3", 00:09:59.999 "uuid": "46f907dc-1453-534a-81e4-4afe0eb66534", 00:09:59.999 "is_configured": true, 00:09:59.999 "data_offset": 2048, 00:09:59.999 "data_size": 63488 00:09:59.999 } 00:09:59.999 ] 00:09:59.999 }' 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.999 09:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.568 09:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:00.568 09:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:00.568 [2024-12-12 09:23:34.406182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.507 [2024-12-12 09:23:35.339121] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:01.507 [2024-12-12 09:23:35.339275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.507 [2024-12-12 09:23:35.339538] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.507 "name": "raid_bdev1", 00:10:01.507 "uuid": "ea3ed357-694f-4595-a39a-d814ed75bc1a", 00:10:01.507 "strip_size_kb": 0, 00:10:01.507 "state": "online", 00:10:01.507 "raid_level": "raid1", 00:10:01.507 "superblock": true, 00:10:01.507 "num_base_bdevs": 3, 00:10:01.507 "num_base_bdevs_discovered": 2, 00:10:01.507 "num_base_bdevs_operational": 2, 00:10:01.507 "base_bdevs_list": [ 00:10:01.507 { 00:10:01.507 "name": null, 00:10:01.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.507 "is_configured": false, 00:10:01.507 "data_offset": 0, 00:10:01.507 "data_size": 63488 00:10:01.507 }, 00:10:01.507 { 00:10:01.507 "name": "BaseBdev2", 00:10:01.507 "uuid": "6f7b2b99-a651-5474-81b8-59367bc9bb67", 00:10:01.507 "is_configured": true, 00:10:01.507 "data_offset": 2048, 00:10:01.507 "data_size": 63488 00:10:01.507 }, 00:10:01.507 { 00:10:01.507 "name": "BaseBdev3", 00:10:01.507 "uuid": "46f907dc-1453-534a-81e4-4afe0eb66534", 00:10:01.507 "is_configured": true, 00:10:01.507 "data_offset": 2048, 00:10:01.507 "data_size": 63488 00:10:01.507 } 00:10:01.507 ] 00:10:01.507 }' 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.507 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.075 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.076 [2024-12-12 09:23:35.847732] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.076 [2024-12-12 09:23:35.847781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.076 [2024-12-12 09:23:35.850403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.076 [2024-12-12 09:23:35.850538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.076 [2024-12-12 09:23:35.850649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.076 [2024-12-12 09:23:35.850666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:02.076 { 00:10:02.076 "results": [ 00:10:02.076 { 00:10:02.076 "job": "raid_bdev1", 00:10:02.076 "core_mask": "0x1", 00:10:02.076 "workload": "randrw", 00:10:02.076 "percentage": 50, 00:10:02.076 "status": "finished", 00:10:02.076 "queue_depth": 1, 00:10:02.076 "io_size": 131072, 00:10:02.076 "runtime": 1.442333, 00:10:02.076 "iops": 11894.617955770269, 00:10:02.076 "mibps": 1486.8272444712836, 00:10:02.076 "io_failed": 0, 00:10:02.076 "io_timeout": 0, 00:10:02.076 "avg_latency_us": 81.48176680265654, 00:10:02.076 "min_latency_us": 22.805240174672488, 00:10:02.076 "max_latency_us": 1395.1441048034935 00:10:02.076 } 00:10:02.076 ], 00:10:02.076 "core_count": 1 00:10:02.076 } 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70377 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70377 ']' 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70377 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:02.076 09:23:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70377 00:10:02.076 killing process with pid 70377 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70377' 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70377 00:10:02.076 [2024-12-12 09:23:35.896445] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.076 09:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70377 00:10:02.335 [2024-12-12 09:23:36.142926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.715 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Wmp6qTrJNI 00:10:03.715 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:03.715 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:03.715 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:03.716 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:03.716 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.716 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.716 09:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:03.716 00:10:03.716 real 0m4.677s 00:10:03.716 user 0m5.441s 00:10:03.716 sys 0m0.660s 00:10:03.716 
************************************ 00:10:03.716 END TEST raid_write_error_test 00:10:03.716 ************************************ 00:10:03.716 09:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.716 09:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.716 09:23:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:03.716 09:23:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:03.716 09:23:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:03.716 09:23:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.716 09:23:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.716 09:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.716 ************************************ 00:10:03.716 START TEST raid_state_function_test 00:10:03.716 ************************************ 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70516 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70516' 00:10:03.716 Process raid pid: 70516 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70516 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 70516 ']' 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.716 09:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.716 [2024-12-12 09:23:37.585436] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:03.716 [2024-12-12 09:23:37.585548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.976 [2024-12-12 09:23:37.766387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.976 [2024-12-12 09:23:37.900194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.237 [2024-12-12 09:23:38.135040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.237 [2024-12-12 09:23:38.135083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 [2024-12-12 09:23:38.404566] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.497 [2024-12-12 09:23:38.404628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.497 [2024-12-12 09:23:38.404639] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.497 [2024-12-12 09:23:38.404649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.497 [2024-12-12 09:23:38.404655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:04.497 [2024-12-12 09:23:38.404664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.497 [2024-12-12 09:23:38.404670] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.497 [2024-12-12 09:23:38.404678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.497 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.497 "name": "Existed_Raid", 00:10:04.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.497 "strip_size_kb": 64, 00:10:04.497 "state": "configuring", 00:10:04.497 "raid_level": "raid0", 00:10:04.497 "superblock": false, 00:10:04.497 "num_base_bdevs": 4, 00:10:04.497 "num_base_bdevs_discovered": 0, 00:10:04.497 "num_base_bdevs_operational": 4, 00:10:04.497 "base_bdevs_list": [ 00:10:04.497 { 00:10:04.497 "name": "BaseBdev1", 00:10:04.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.497 "is_configured": false, 00:10:04.497 "data_offset": 0, 00:10:04.497 "data_size": 0 00:10:04.497 }, 00:10:04.497 { 00:10:04.497 "name": "BaseBdev2", 00:10:04.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.498 "is_configured": false, 00:10:04.498 "data_offset": 0, 00:10:04.498 "data_size": 0 00:10:04.498 }, 00:10:04.498 { 00:10:04.498 "name": "BaseBdev3", 00:10:04.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.498 "is_configured": false, 00:10:04.498 "data_offset": 0, 00:10:04.498 "data_size": 0 00:10:04.498 }, 00:10:04.498 { 00:10:04.498 "name": "BaseBdev4", 00:10:04.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.498 "is_configured": false, 00:10:04.498 "data_offset": 0, 00:10:04.498 "data_size": 0 00:10:04.498 } 00:10:04.498 ] 00:10:04.498 }' 00:10:04.498 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.498 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 [2024-12-12 09:23:38.871775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.068 [2024-12-12 09:23:38.871813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 [2024-12-12 09:23:38.883743] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.068 [2024-12-12 09:23:38.883781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.068 [2024-12-12 09:23:38.883788] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.068 [2024-12-12 09:23:38.883798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.068 [2024-12-12 09:23:38.883804] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.068 [2024-12-12 09:23:38.883813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.068 [2024-12-12 09:23:38.883818] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.068 [2024-12-12 09:23:38.883827] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 [2024-12-12 09:23:38.936479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.068 BaseBdev1 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 [ 00:10:05.068 { 00:10:05.068 "name": "BaseBdev1", 00:10:05.068 "aliases": [ 00:10:05.068 "db6a16d9-8df3-4b8d-8b76-a7e4195a8740" 00:10:05.068 ], 00:10:05.068 "product_name": "Malloc disk", 00:10:05.068 "block_size": 512, 00:10:05.068 "num_blocks": 65536, 00:10:05.068 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:05.068 "assigned_rate_limits": { 00:10:05.068 "rw_ios_per_sec": 0, 00:10:05.068 "rw_mbytes_per_sec": 0, 00:10:05.068 "r_mbytes_per_sec": 0, 00:10:05.068 "w_mbytes_per_sec": 0 00:10:05.068 }, 00:10:05.068 "claimed": true, 00:10:05.068 "claim_type": "exclusive_write", 00:10:05.068 "zoned": false, 00:10:05.068 "supported_io_types": { 00:10:05.068 "read": true, 00:10:05.068 "write": true, 00:10:05.068 "unmap": true, 00:10:05.068 "flush": true, 00:10:05.068 "reset": true, 00:10:05.068 "nvme_admin": false, 00:10:05.068 "nvme_io": false, 00:10:05.068 "nvme_io_md": false, 00:10:05.068 "write_zeroes": true, 00:10:05.068 "zcopy": true, 00:10:05.068 "get_zone_info": false, 00:10:05.068 "zone_management": false, 00:10:05.068 "zone_append": false, 00:10:05.068 "compare": false, 00:10:05.068 "compare_and_write": false, 00:10:05.068 "abort": true, 00:10:05.068 "seek_hole": false, 00:10:05.068 "seek_data": false, 00:10:05.068 "copy": true, 00:10:05.068 "nvme_iov_md": false 00:10:05.068 }, 00:10:05.068 "memory_domains": [ 00:10:05.068 { 00:10:05.068 "dma_device_id": "system", 00:10:05.068 "dma_device_type": 1 00:10:05.068 }, 00:10:05.068 { 00:10:05.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.068 "dma_device_type": 2 00:10:05.068 } 00:10:05.068 ], 00:10:05.068 "driver_specific": {} 00:10:05.068 } 00:10:05.068 ] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.068 09:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.068 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.068 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.068 "name": "Existed_Raid", 
00:10:05.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.068 "strip_size_kb": 64, 00:10:05.068 "state": "configuring", 00:10:05.068 "raid_level": "raid0", 00:10:05.068 "superblock": false, 00:10:05.068 "num_base_bdevs": 4, 00:10:05.068 "num_base_bdevs_discovered": 1, 00:10:05.068 "num_base_bdevs_operational": 4, 00:10:05.068 "base_bdevs_list": [ 00:10:05.068 { 00:10:05.068 "name": "BaseBdev1", 00:10:05.069 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:05.069 "is_configured": true, 00:10:05.069 "data_offset": 0, 00:10:05.069 "data_size": 65536 00:10:05.069 }, 00:10:05.069 { 00:10:05.069 "name": "BaseBdev2", 00:10:05.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.069 "is_configured": false, 00:10:05.069 "data_offset": 0, 00:10:05.069 "data_size": 0 00:10:05.069 }, 00:10:05.069 { 00:10:05.069 "name": "BaseBdev3", 00:10:05.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.069 "is_configured": false, 00:10:05.069 "data_offset": 0, 00:10:05.069 "data_size": 0 00:10:05.069 }, 00:10:05.069 { 00:10:05.069 "name": "BaseBdev4", 00:10:05.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.069 "is_configured": false, 00:10:05.069 "data_offset": 0, 00:10:05.069 "data_size": 0 00:10:05.069 } 00:10:05.069 ] 00:10:05.069 }' 00:10:05.069 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.069 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.647 [2024-12-12 09:23:39.403762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.647 [2024-12-12 09:23:39.403871] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.647 [2024-12-12 09:23:39.411833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.647 [2024-12-12 09:23:39.413940] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.647 [2024-12-12 09:23:39.414029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.647 [2024-12-12 09:23:39.414058] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.647 [2024-12-12 09:23:39.414083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.647 [2024-12-12 09:23:39.414101] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.647 [2024-12-12 09:23:39.414121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.647 "name": "Existed_Raid", 00:10:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.647 "strip_size_kb": 64, 00:10:05.647 "state": "configuring", 00:10:05.647 "raid_level": "raid0", 00:10:05.647 "superblock": false, 00:10:05.647 "num_base_bdevs": 4, 00:10:05.647 
"num_base_bdevs_discovered": 1, 00:10:05.647 "num_base_bdevs_operational": 4, 00:10:05.647 "base_bdevs_list": [ 00:10:05.647 { 00:10:05.647 "name": "BaseBdev1", 00:10:05.647 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:05.647 "is_configured": true, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 65536 00:10:05.647 }, 00:10:05.647 { 00:10:05.647 "name": "BaseBdev2", 00:10:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.647 "is_configured": false, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 0 00:10:05.647 }, 00:10:05.647 { 00:10:05.647 "name": "BaseBdev3", 00:10:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.647 "is_configured": false, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 0 00:10:05.647 }, 00:10:05.647 { 00:10:05.647 "name": "BaseBdev4", 00:10:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.647 "is_configured": false, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 0 00:10:05.647 } 00:10:05.647 ] 00:10:05.647 }' 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.647 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.926 [2024-12-12 09:23:39.903529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.926 BaseBdev2 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:05.926 09:23:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.926 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.927 [ 00:10:05.927 { 00:10:05.927 "name": "BaseBdev2", 00:10:05.927 "aliases": [ 00:10:05.927 "0c39bd65-a19a-4bcc-9ddb-04dc13a64010" 00:10:05.927 ], 00:10:05.927 "product_name": "Malloc disk", 00:10:05.927 "block_size": 512, 00:10:05.927 "num_blocks": 65536, 00:10:05.927 "uuid": "0c39bd65-a19a-4bcc-9ddb-04dc13a64010", 00:10:05.927 "assigned_rate_limits": { 00:10:05.927 "rw_ios_per_sec": 0, 00:10:05.927 "rw_mbytes_per_sec": 0, 00:10:05.927 "r_mbytes_per_sec": 0, 00:10:05.927 "w_mbytes_per_sec": 0 00:10:05.927 }, 00:10:05.927 "claimed": true, 00:10:05.927 "claim_type": "exclusive_write", 00:10:05.927 "zoned": false, 00:10:05.927 "supported_io_types": { 
00:10:05.927 "read": true, 00:10:05.927 "write": true, 00:10:05.927 "unmap": true, 00:10:05.927 "flush": true, 00:10:05.927 "reset": true, 00:10:05.927 "nvme_admin": false, 00:10:05.927 "nvme_io": false, 00:10:05.927 "nvme_io_md": false, 00:10:05.927 "write_zeroes": true, 00:10:05.927 "zcopy": true, 00:10:05.927 "get_zone_info": false, 00:10:05.927 "zone_management": false, 00:10:05.927 "zone_append": false, 00:10:05.927 "compare": false, 00:10:05.927 "compare_and_write": false, 00:10:05.927 "abort": true, 00:10:05.927 "seek_hole": false, 00:10:05.927 "seek_data": false, 00:10:05.927 "copy": true, 00:10:05.927 "nvme_iov_md": false 00:10:05.927 }, 00:10:05.927 "memory_domains": [ 00:10:05.927 { 00:10:05.927 "dma_device_id": "system", 00:10:05.927 "dma_device_type": 1 00:10:05.927 }, 00:10:05.927 { 00:10:05.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.927 "dma_device_type": 2 00:10:05.927 } 00:10:05.927 ], 00:10:05.927 "driver_specific": {} 00:10:05.927 } 00:10:05.927 ] 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.927 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.193 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.193 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.193 "name": "Existed_Raid", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.193 "strip_size_kb": 64, 00:10:06.193 "state": "configuring", 00:10:06.193 "raid_level": "raid0", 00:10:06.193 "superblock": false, 00:10:06.193 "num_base_bdevs": 4, 00:10:06.193 "num_base_bdevs_discovered": 2, 00:10:06.193 "num_base_bdevs_operational": 4, 00:10:06.193 "base_bdevs_list": [ 00:10:06.193 { 00:10:06.193 "name": "BaseBdev1", 00:10:06.193 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:06.193 "is_configured": true, 00:10:06.193 "data_offset": 0, 00:10:06.193 "data_size": 65536 00:10:06.193 }, 00:10:06.193 { 00:10:06.193 "name": "BaseBdev2", 00:10:06.193 "uuid": "0c39bd65-a19a-4bcc-9ddb-04dc13a64010", 00:10:06.193 
"is_configured": true, 00:10:06.193 "data_offset": 0, 00:10:06.193 "data_size": 65536 00:10:06.193 }, 00:10:06.193 { 00:10:06.193 "name": "BaseBdev3", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.193 "is_configured": false, 00:10:06.193 "data_offset": 0, 00:10:06.193 "data_size": 0 00:10:06.193 }, 00:10:06.193 { 00:10:06.193 "name": "BaseBdev4", 00:10:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.193 "is_configured": false, 00:10:06.193 "data_offset": 0, 00:10:06.193 "data_size": 0 00:10:06.193 } 00:10:06.193 ] 00:10:06.193 }' 00:10:06.193 09:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.193 09:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.454 [2024-12-12 09:23:40.406110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.454 BaseBdev3 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.454 [ 00:10:06.454 { 00:10:06.454 "name": "BaseBdev3", 00:10:06.454 "aliases": [ 00:10:06.454 "6872a4af-f74e-4d6e-b585-470b29bd75df" 00:10:06.454 ], 00:10:06.454 "product_name": "Malloc disk", 00:10:06.454 "block_size": 512, 00:10:06.454 "num_blocks": 65536, 00:10:06.454 "uuid": "6872a4af-f74e-4d6e-b585-470b29bd75df", 00:10:06.454 "assigned_rate_limits": { 00:10:06.454 "rw_ios_per_sec": 0, 00:10:06.454 "rw_mbytes_per_sec": 0, 00:10:06.454 "r_mbytes_per_sec": 0, 00:10:06.454 "w_mbytes_per_sec": 0 00:10:06.454 }, 00:10:06.454 "claimed": true, 00:10:06.454 "claim_type": "exclusive_write", 00:10:06.454 "zoned": false, 00:10:06.454 "supported_io_types": { 00:10:06.454 "read": true, 00:10:06.454 "write": true, 00:10:06.454 "unmap": true, 00:10:06.454 "flush": true, 00:10:06.454 "reset": true, 00:10:06.454 "nvme_admin": false, 00:10:06.454 "nvme_io": false, 00:10:06.454 "nvme_io_md": false, 00:10:06.454 "write_zeroes": true, 00:10:06.454 "zcopy": true, 00:10:06.454 "get_zone_info": false, 00:10:06.454 "zone_management": false, 00:10:06.454 "zone_append": false, 00:10:06.454 "compare": false, 00:10:06.454 "compare_and_write": false, 
00:10:06.454 "abort": true, 00:10:06.454 "seek_hole": false, 00:10:06.454 "seek_data": false, 00:10:06.454 "copy": true, 00:10:06.454 "nvme_iov_md": false 00:10:06.454 }, 00:10:06.454 "memory_domains": [ 00:10:06.454 { 00:10:06.454 "dma_device_id": "system", 00:10:06.454 "dma_device_type": 1 00:10:06.454 }, 00:10:06.454 { 00:10:06.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.454 "dma_device_type": 2 00:10:06.454 } 00:10:06.454 ], 00:10:06.454 "driver_specific": {} 00:10:06.454 } 00:10:06.454 ] 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.454 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.713 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.713 "name": "Existed_Raid", 00:10:06.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.713 "strip_size_kb": 64, 00:10:06.713 "state": "configuring", 00:10:06.713 "raid_level": "raid0", 00:10:06.713 "superblock": false, 00:10:06.713 "num_base_bdevs": 4, 00:10:06.713 "num_base_bdevs_discovered": 3, 00:10:06.713 "num_base_bdevs_operational": 4, 00:10:06.713 "base_bdevs_list": [ 00:10:06.713 { 00:10:06.713 "name": "BaseBdev1", 00:10:06.713 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:06.713 "is_configured": true, 00:10:06.713 "data_offset": 0, 00:10:06.713 "data_size": 65536 00:10:06.713 }, 00:10:06.713 { 00:10:06.713 "name": "BaseBdev2", 00:10:06.713 "uuid": "0c39bd65-a19a-4bcc-9ddb-04dc13a64010", 00:10:06.713 "is_configured": true, 00:10:06.714 "data_offset": 0, 00:10:06.714 "data_size": 65536 00:10:06.714 }, 00:10:06.714 { 00:10:06.714 "name": "BaseBdev3", 00:10:06.714 "uuid": "6872a4af-f74e-4d6e-b585-470b29bd75df", 00:10:06.714 "is_configured": true, 00:10:06.714 "data_offset": 0, 00:10:06.714 "data_size": 65536 00:10:06.714 }, 00:10:06.714 { 00:10:06.714 "name": "BaseBdev4", 00:10:06.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.714 "is_configured": false, 
00:10:06.714 "data_offset": 0, 00:10:06.714 "data_size": 0 00:10:06.714 } 00:10:06.714 ] 00:10:06.714 }' 00:10:06.714 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.714 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 [2024-12-12 09:23:40.934762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:06.974 [2024-12-12 09:23:40.934884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:06.974 [2024-12-12 09:23:40.934900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:06.974 [2024-12-12 09:23:40.935252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:06.974 [2024-12-12 09:23:40.935441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.974 [2024-12-12 09:23:40.935455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:06.974 [2024-12-12 09:23:40.935760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.974 BaseBdev4 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 [ 00:10:06.974 { 00:10:06.974 "name": "BaseBdev4", 00:10:06.974 "aliases": [ 00:10:06.974 "219ce9a0-dff4-4a10-a3d6-120a284ae679" 00:10:06.974 ], 00:10:06.974 "product_name": "Malloc disk", 00:10:06.974 "block_size": 512, 00:10:06.974 "num_blocks": 65536, 00:10:06.974 "uuid": "219ce9a0-dff4-4a10-a3d6-120a284ae679", 00:10:06.974 "assigned_rate_limits": { 00:10:06.974 "rw_ios_per_sec": 0, 00:10:06.974 "rw_mbytes_per_sec": 0, 00:10:06.974 "r_mbytes_per_sec": 0, 00:10:06.974 "w_mbytes_per_sec": 0 00:10:06.974 }, 00:10:06.974 "claimed": true, 00:10:06.974 "claim_type": "exclusive_write", 00:10:06.974 "zoned": false, 00:10:06.974 "supported_io_types": { 00:10:06.974 "read": true, 00:10:06.974 "write": true, 00:10:06.974 "unmap": true, 00:10:06.974 "flush": true, 00:10:06.974 "reset": true, 00:10:06.974 
"nvme_admin": false, 00:10:06.974 "nvme_io": false, 00:10:06.974 "nvme_io_md": false, 00:10:06.974 "write_zeroes": true, 00:10:06.974 "zcopy": true, 00:10:06.974 "get_zone_info": false, 00:10:06.974 "zone_management": false, 00:10:06.974 "zone_append": false, 00:10:06.974 "compare": false, 00:10:06.974 "compare_and_write": false, 00:10:06.974 "abort": true, 00:10:06.974 "seek_hole": false, 00:10:06.974 "seek_data": false, 00:10:06.974 "copy": true, 00:10:06.974 "nvme_iov_md": false 00:10:06.974 }, 00:10:06.974 "memory_domains": [ 00:10:06.974 { 00:10:06.974 "dma_device_id": "system", 00:10:06.974 "dma_device_type": 1 00:10:06.974 }, 00:10:06.974 { 00:10:06.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.974 "dma_device_type": 2 00:10:06.974 } 00:10:06.974 ], 00:10:06.974 "driver_specific": {} 00:10:06.974 } 00:10:06.974 ] 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.974 09:23:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.974 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.234 09:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.234 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.234 "name": "Existed_Raid", 00:10:07.234 "uuid": "756ec856-3b2b-4c1e-8c9b-3581ff676cd0", 00:10:07.234 "strip_size_kb": 64, 00:10:07.234 "state": "online", 00:10:07.234 "raid_level": "raid0", 00:10:07.234 "superblock": false, 00:10:07.234 "num_base_bdevs": 4, 00:10:07.234 "num_base_bdevs_discovered": 4, 00:10:07.234 "num_base_bdevs_operational": 4, 00:10:07.234 "base_bdevs_list": [ 00:10:07.234 { 00:10:07.234 "name": "BaseBdev1", 00:10:07.234 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:07.234 "is_configured": true, 00:10:07.234 "data_offset": 0, 00:10:07.234 "data_size": 65536 00:10:07.234 }, 00:10:07.234 { 00:10:07.234 "name": "BaseBdev2", 00:10:07.234 "uuid": "0c39bd65-a19a-4bcc-9ddb-04dc13a64010", 00:10:07.234 "is_configured": true, 00:10:07.234 "data_offset": 0, 00:10:07.234 "data_size": 65536 00:10:07.234 }, 00:10:07.234 { 00:10:07.234 "name": "BaseBdev3", 00:10:07.234 "uuid": 
"6872a4af-f74e-4d6e-b585-470b29bd75df", 00:10:07.234 "is_configured": true, 00:10:07.234 "data_offset": 0, 00:10:07.234 "data_size": 65536 00:10:07.234 }, 00:10:07.234 { 00:10:07.234 "name": "BaseBdev4", 00:10:07.234 "uuid": "219ce9a0-dff4-4a10-a3d6-120a284ae679", 00:10:07.234 "is_configured": true, 00:10:07.234 "data_offset": 0, 00:10:07.234 "data_size": 65536 00:10:07.234 } 00:10:07.234 ] 00:10:07.234 }' 00:10:07.234 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.234 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.493 [2024-12-12 09:23:41.418343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.493 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.493 09:23:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.493 "name": "Existed_Raid", 00:10:07.493 "aliases": [ 00:10:07.493 "756ec856-3b2b-4c1e-8c9b-3581ff676cd0" 00:10:07.493 ], 00:10:07.493 "product_name": "Raid Volume", 00:10:07.493 "block_size": 512, 00:10:07.493 "num_blocks": 262144, 00:10:07.493 "uuid": "756ec856-3b2b-4c1e-8c9b-3581ff676cd0", 00:10:07.493 "assigned_rate_limits": { 00:10:07.493 "rw_ios_per_sec": 0, 00:10:07.493 "rw_mbytes_per_sec": 0, 00:10:07.493 "r_mbytes_per_sec": 0, 00:10:07.493 "w_mbytes_per_sec": 0 00:10:07.493 }, 00:10:07.493 "claimed": false, 00:10:07.493 "zoned": false, 00:10:07.493 "supported_io_types": { 00:10:07.493 "read": true, 00:10:07.493 "write": true, 00:10:07.493 "unmap": true, 00:10:07.493 "flush": true, 00:10:07.493 "reset": true, 00:10:07.493 "nvme_admin": false, 00:10:07.493 "nvme_io": false, 00:10:07.493 "nvme_io_md": false, 00:10:07.493 "write_zeroes": true, 00:10:07.493 "zcopy": false, 00:10:07.493 "get_zone_info": false, 00:10:07.493 "zone_management": false, 00:10:07.493 "zone_append": false, 00:10:07.493 "compare": false, 00:10:07.493 "compare_and_write": false, 00:10:07.493 "abort": false, 00:10:07.493 "seek_hole": false, 00:10:07.493 "seek_data": false, 00:10:07.493 "copy": false, 00:10:07.493 "nvme_iov_md": false 00:10:07.493 }, 00:10:07.493 "memory_domains": [ 00:10:07.493 { 00:10:07.493 "dma_device_id": "system", 00:10:07.493 "dma_device_type": 1 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.493 "dma_device_type": 2 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "system", 00:10:07.493 "dma_device_type": 1 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.493 "dma_device_type": 2 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "system", 00:10:07.493 "dma_device_type": 1 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:07.493 "dma_device_type": 2 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "system", 00:10:07.493 "dma_device_type": 1 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.493 "dma_device_type": 2 00:10:07.493 } 00:10:07.493 ], 00:10:07.493 "driver_specific": { 00:10:07.493 "raid": { 00:10:07.493 "uuid": "756ec856-3b2b-4c1e-8c9b-3581ff676cd0", 00:10:07.493 "strip_size_kb": 64, 00:10:07.493 "state": "online", 00:10:07.493 "raid_level": "raid0", 00:10:07.493 "superblock": false, 00:10:07.493 "num_base_bdevs": 4, 00:10:07.493 "num_base_bdevs_discovered": 4, 00:10:07.493 "num_base_bdevs_operational": 4, 00:10:07.493 "base_bdevs_list": [ 00:10:07.493 { 00:10:07.493 "name": "BaseBdev1", 00:10:07.493 "uuid": "db6a16d9-8df3-4b8d-8b76-a7e4195a8740", 00:10:07.493 "is_configured": true, 00:10:07.493 "data_offset": 0, 00:10:07.493 "data_size": 65536 00:10:07.493 }, 00:10:07.493 { 00:10:07.493 "name": "BaseBdev2", 00:10:07.493 "uuid": "0c39bd65-a19a-4bcc-9ddb-04dc13a64010", 00:10:07.494 "is_configured": true, 00:10:07.494 "data_offset": 0, 00:10:07.494 "data_size": 65536 00:10:07.494 }, 00:10:07.494 { 00:10:07.494 "name": "BaseBdev3", 00:10:07.494 "uuid": "6872a4af-f74e-4d6e-b585-470b29bd75df", 00:10:07.494 "is_configured": true, 00:10:07.494 "data_offset": 0, 00:10:07.494 "data_size": 65536 00:10:07.494 }, 00:10:07.494 { 00:10:07.494 "name": "BaseBdev4", 00:10:07.494 "uuid": "219ce9a0-dff4-4a10-a3d6-120a284ae679", 00:10:07.494 "is_configured": true, 00:10:07.494 "data_offset": 0, 00:10:07.494 "data_size": 65536 00:10:07.494 } 00:10:07.494 ] 00:10:07.494 } 00:10:07.494 } 00:10:07.494 }' 00:10:07.494 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.494 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.494 BaseBdev2 00:10:07.494 BaseBdev3 
00:10:07.494 BaseBdev4' 00:10:07.494 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.753 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.754 09:23:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.754 09:23:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.754 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.754 [2024-12-12 09:23:41.749457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.754 [2024-12-12 09:23:41.749531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.754 [2024-12-12 09:23:41.749619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.014 "name": "Existed_Raid", 00:10:08.014 "uuid": "756ec856-3b2b-4c1e-8c9b-3581ff676cd0", 00:10:08.014 "strip_size_kb": 64, 00:10:08.014 "state": "offline", 00:10:08.014 "raid_level": "raid0", 00:10:08.014 "superblock": false, 00:10:08.014 "num_base_bdevs": 4, 00:10:08.014 "num_base_bdevs_discovered": 3, 00:10:08.014 "num_base_bdevs_operational": 3, 00:10:08.014 "base_bdevs_list": [ 00:10:08.014 { 00:10:08.014 "name": null, 00:10:08.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.014 "is_configured": false, 00:10:08.014 "data_offset": 0, 00:10:08.014 "data_size": 65536 00:10:08.014 }, 00:10:08.014 { 00:10:08.014 "name": "BaseBdev2", 00:10:08.014 "uuid": "0c39bd65-a19a-4bcc-9ddb-04dc13a64010", 00:10:08.014 "is_configured": 
true, 00:10:08.014 "data_offset": 0, 00:10:08.014 "data_size": 65536 00:10:08.014 }, 00:10:08.014 { 00:10:08.014 "name": "BaseBdev3", 00:10:08.014 "uuid": "6872a4af-f74e-4d6e-b585-470b29bd75df", 00:10:08.014 "is_configured": true, 00:10:08.014 "data_offset": 0, 00:10:08.014 "data_size": 65536 00:10:08.014 }, 00:10:08.014 { 00:10:08.014 "name": "BaseBdev4", 00:10:08.014 "uuid": "219ce9a0-dff4-4a10-a3d6-120a284ae679", 00:10:08.014 "is_configured": true, 00:10:08.014 "data_offset": 0, 00:10:08.014 "data_size": 65536 00:10:08.014 } 00:10:08.014 ] 00:10:08.014 }' 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.014 09:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.273 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.534 [2024-12-12 09:23:42.316123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.534 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.535 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.535 [2024-12-12 09:23:42.479244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.795 09:23:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.795 [2024-12-12 09:23:42.638407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:08.795 [2024-12-12 09:23:42.638517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.795 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.055 BaseBdev2 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.055 [ 00:10:09.055 { 00:10:09.055 "name": "BaseBdev2", 00:10:09.055 "aliases": [ 00:10:09.055 "468662e3-7613-491b-a2ab-1bfb4d7227ea" 00:10:09.055 ], 00:10:09.055 "product_name": "Malloc disk", 00:10:09.055 "block_size": 512, 00:10:09.055 "num_blocks": 65536, 00:10:09.055 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:09.055 "assigned_rate_limits": { 00:10:09.055 "rw_ios_per_sec": 0, 00:10:09.055 "rw_mbytes_per_sec": 0, 00:10:09.055 "r_mbytes_per_sec": 0, 00:10:09.055 "w_mbytes_per_sec": 0 00:10:09.055 }, 00:10:09.055 "claimed": false, 00:10:09.055 "zoned": false, 00:10:09.055 "supported_io_types": { 00:10:09.055 "read": true, 00:10:09.055 "write": true, 00:10:09.055 "unmap": true, 00:10:09.055 "flush": true, 00:10:09.055 "reset": true, 00:10:09.055 "nvme_admin": false, 00:10:09.055 "nvme_io": false, 00:10:09.055 "nvme_io_md": false, 00:10:09.055 "write_zeroes": true, 00:10:09.055 "zcopy": true, 00:10:09.055 "get_zone_info": false, 00:10:09.055 "zone_management": false, 00:10:09.055 "zone_append": false, 00:10:09.055 "compare": false, 00:10:09.055 "compare_and_write": false, 00:10:09.055 "abort": true, 00:10:09.055 "seek_hole": false, 00:10:09.055 
"seek_data": false, 00:10:09.055 "copy": true, 00:10:09.055 "nvme_iov_md": false 00:10:09.055 }, 00:10:09.055 "memory_domains": [ 00:10:09.055 { 00:10:09.055 "dma_device_id": "system", 00:10:09.055 "dma_device_type": 1 00:10:09.055 }, 00:10:09.055 { 00:10:09.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.055 "dma_device_type": 2 00:10:09.055 } 00:10:09.055 ], 00:10:09.055 "driver_specific": {} 00:10:09.055 } 00:10:09.055 ] 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.055 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 BaseBdev3 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 [ 00:10:09.056 { 00:10:09.056 "name": "BaseBdev3", 00:10:09.056 "aliases": [ 00:10:09.056 "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b" 00:10:09.056 ], 00:10:09.056 "product_name": "Malloc disk", 00:10:09.056 "block_size": 512, 00:10:09.056 "num_blocks": 65536, 00:10:09.056 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:09.056 "assigned_rate_limits": { 00:10:09.056 "rw_ios_per_sec": 0, 00:10:09.056 "rw_mbytes_per_sec": 0, 00:10:09.056 "r_mbytes_per_sec": 0, 00:10:09.056 "w_mbytes_per_sec": 0 00:10:09.056 }, 00:10:09.056 "claimed": false, 00:10:09.056 "zoned": false, 00:10:09.056 "supported_io_types": { 00:10:09.056 "read": true, 00:10:09.056 "write": true, 00:10:09.056 "unmap": true, 00:10:09.056 "flush": true, 00:10:09.056 "reset": true, 00:10:09.056 "nvme_admin": false, 00:10:09.056 "nvme_io": false, 00:10:09.056 "nvme_io_md": false, 00:10:09.056 "write_zeroes": true, 00:10:09.056 "zcopy": true, 00:10:09.056 "get_zone_info": false, 00:10:09.056 "zone_management": false, 00:10:09.056 "zone_append": false, 00:10:09.056 "compare": false, 00:10:09.056 "compare_and_write": false, 00:10:09.056 "abort": true, 00:10:09.056 "seek_hole": false, 00:10:09.056 "seek_data": false, 
00:10:09.056 "copy": true, 00:10:09.056 "nvme_iov_md": false 00:10:09.056 }, 00:10:09.056 "memory_domains": [ 00:10:09.056 { 00:10:09.056 "dma_device_id": "system", 00:10:09.056 "dma_device_type": 1 00:10:09.056 }, 00:10:09.056 { 00:10:09.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.056 "dma_device_type": 2 00:10:09.056 } 00:10:09.056 ], 00:10:09.056 "driver_specific": {} 00:10:09.056 } 00:10:09.056 ] 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 BaseBdev4 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.056 
09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 [ 00:10:09.056 { 00:10:09.056 "name": "BaseBdev4", 00:10:09.056 "aliases": [ 00:10:09.056 "432f49ed-bd58-404b-9349-59df49edc935" 00:10:09.056 ], 00:10:09.056 "product_name": "Malloc disk", 00:10:09.056 "block_size": 512, 00:10:09.056 "num_blocks": 65536, 00:10:09.056 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:09.056 "assigned_rate_limits": { 00:10:09.056 "rw_ios_per_sec": 0, 00:10:09.056 "rw_mbytes_per_sec": 0, 00:10:09.056 "r_mbytes_per_sec": 0, 00:10:09.056 "w_mbytes_per_sec": 0 00:10:09.056 }, 00:10:09.056 "claimed": false, 00:10:09.056 "zoned": false, 00:10:09.056 "supported_io_types": { 00:10:09.056 "read": true, 00:10:09.056 "write": true, 00:10:09.056 "unmap": true, 00:10:09.056 "flush": true, 00:10:09.056 "reset": true, 00:10:09.056 "nvme_admin": false, 00:10:09.056 "nvme_io": false, 00:10:09.056 "nvme_io_md": false, 00:10:09.056 "write_zeroes": true, 00:10:09.056 "zcopy": true, 00:10:09.056 "get_zone_info": false, 00:10:09.056 "zone_management": false, 00:10:09.056 "zone_append": false, 00:10:09.056 "compare": false, 00:10:09.056 "compare_and_write": false, 00:10:09.056 "abort": true, 00:10:09.056 "seek_hole": false, 00:10:09.056 "seek_data": false, 00:10:09.056 
"copy": true, 00:10:09.056 "nvme_iov_md": false 00:10:09.056 }, 00:10:09.056 "memory_domains": [ 00:10:09.056 { 00:10:09.056 "dma_device_id": "system", 00:10:09.056 "dma_device_type": 1 00:10:09.056 }, 00:10:09.056 { 00:10:09.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.056 "dma_device_type": 2 00:10:09.056 } 00:10:09.056 ], 00:10:09.056 "driver_specific": {} 00:10:09.056 } 00:10:09.056 ] 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 [2024-12-12 09:23:43.051256] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.056 [2024-12-12 09:23:43.051377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.056 [2024-12-12 09:23:43.051431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.056 [2024-12-12 09:23:43.053745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.056 [2024-12-12 09:23:43.053845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 09:23:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.316 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.316 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.316 "name": "Existed_Raid", 00:10:09.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.316 "strip_size_kb": 64, 00:10:09.316 "state": "configuring", 00:10:09.316 
"raid_level": "raid0", 00:10:09.316 "superblock": false, 00:10:09.316 "num_base_bdevs": 4, 00:10:09.316 "num_base_bdevs_discovered": 3, 00:10:09.316 "num_base_bdevs_operational": 4, 00:10:09.316 "base_bdevs_list": [ 00:10:09.316 { 00:10:09.316 "name": "BaseBdev1", 00:10:09.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.316 "is_configured": false, 00:10:09.316 "data_offset": 0, 00:10:09.316 "data_size": 0 00:10:09.316 }, 00:10:09.316 { 00:10:09.316 "name": "BaseBdev2", 00:10:09.316 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:09.316 "is_configured": true, 00:10:09.316 "data_offset": 0, 00:10:09.316 "data_size": 65536 00:10:09.316 }, 00:10:09.316 { 00:10:09.316 "name": "BaseBdev3", 00:10:09.316 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:09.316 "is_configured": true, 00:10:09.316 "data_offset": 0, 00:10:09.316 "data_size": 65536 00:10:09.316 }, 00:10:09.316 { 00:10:09.316 "name": "BaseBdev4", 00:10:09.316 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:09.316 "is_configured": true, 00:10:09.316 "data_offset": 0, 00:10:09.316 "data_size": 65536 00:10:09.316 } 00:10:09.316 ] 00:10:09.316 }' 00:10:09.316 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.316 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.576 [2024-12-12 09:23:43.526434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.576 "name": "Existed_Raid", 00:10:09.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.576 "strip_size_kb": 64, 00:10:09.576 "state": "configuring", 00:10:09.576 "raid_level": "raid0", 00:10:09.576 "superblock": false, 00:10:09.576 
"num_base_bdevs": 4, 00:10:09.576 "num_base_bdevs_discovered": 2, 00:10:09.576 "num_base_bdevs_operational": 4, 00:10:09.576 "base_bdevs_list": [ 00:10:09.576 { 00:10:09.576 "name": "BaseBdev1", 00:10:09.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.576 "is_configured": false, 00:10:09.576 "data_offset": 0, 00:10:09.576 "data_size": 0 00:10:09.576 }, 00:10:09.576 { 00:10:09.576 "name": null, 00:10:09.576 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:09.576 "is_configured": false, 00:10:09.576 "data_offset": 0, 00:10:09.576 "data_size": 65536 00:10:09.576 }, 00:10:09.576 { 00:10:09.576 "name": "BaseBdev3", 00:10:09.576 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:09.576 "is_configured": true, 00:10:09.576 "data_offset": 0, 00:10:09.576 "data_size": 65536 00:10:09.576 }, 00:10:09.576 { 00:10:09.576 "name": "BaseBdev4", 00:10:09.576 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:09.576 "is_configured": true, 00:10:09.576 "data_offset": 0, 00:10:09.576 "data_size": 65536 00:10:09.576 } 00:10:09.576 ] 00:10:09.576 }' 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.576 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.144 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.144 09:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.144 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.144 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.144 09:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.144 09:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.144 [2024-12-12 09:23:44.071630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.144 BaseBdev1 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.144 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.145 [ 00:10:10.145 { 00:10:10.145 "name": "BaseBdev1", 00:10:10.145 "aliases": [ 00:10:10.145 "28e119c2-67cf-4128-a3f8-6ab716e7e671" 00:10:10.145 ], 00:10:10.145 "product_name": "Malloc disk", 00:10:10.145 "block_size": 512, 00:10:10.145 "num_blocks": 65536, 00:10:10.145 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:10.145 "assigned_rate_limits": { 00:10:10.145 "rw_ios_per_sec": 0, 00:10:10.145 "rw_mbytes_per_sec": 0, 00:10:10.145 "r_mbytes_per_sec": 0, 00:10:10.145 "w_mbytes_per_sec": 0 00:10:10.145 }, 00:10:10.145 "claimed": true, 00:10:10.145 "claim_type": "exclusive_write", 00:10:10.145 "zoned": false, 00:10:10.145 "supported_io_types": { 00:10:10.145 "read": true, 00:10:10.145 "write": true, 00:10:10.145 "unmap": true, 00:10:10.145 "flush": true, 00:10:10.145 "reset": true, 00:10:10.145 "nvme_admin": false, 00:10:10.145 "nvme_io": false, 00:10:10.145 "nvme_io_md": false, 00:10:10.145 "write_zeroes": true, 00:10:10.145 "zcopy": true, 00:10:10.145 "get_zone_info": false, 00:10:10.145 "zone_management": false, 00:10:10.145 "zone_append": false, 00:10:10.145 "compare": false, 00:10:10.145 "compare_and_write": false, 00:10:10.145 "abort": true, 00:10:10.145 "seek_hole": false, 00:10:10.145 "seek_data": false, 00:10:10.145 "copy": true, 00:10:10.145 "nvme_iov_md": false 00:10:10.145 }, 00:10:10.145 "memory_domains": [ 00:10:10.145 { 00:10:10.145 "dma_device_id": "system", 00:10:10.145 "dma_device_type": 1 00:10:10.145 }, 00:10:10.145 { 00:10:10.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.145 "dma_device_type": 2 00:10:10.145 } 00:10:10.145 ], 00:10:10.145 "driver_specific": {} 00:10:10.145 } 00:10:10.145 ] 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.145 "name": "Existed_Raid", 00:10:10.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.145 "strip_size_kb": 64, 00:10:10.145 "state": "configuring", 00:10:10.145 "raid_level": "raid0", 00:10:10.145 "superblock": false, 
00:10:10.145 "num_base_bdevs": 4, 00:10:10.145 "num_base_bdevs_discovered": 3, 00:10:10.145 "num_base_bdevs_operational": 4, 00:10:10.145 "base_bdevs_list": [ 00:10:10.145 { 00:10:10.145 "name": "BaseBdev1", 00:10:10.145 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:10.145 "is_configured": true, 00:10:10.145 "data_offset": 0, 00:10:10.145 "data_size": 65536 00:10:10.145 }, 00:10:10.145 { 00:10:10.145 "name": null, 00:10:10.145 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:10.145 "is_configured": false, 00:10:10.145 "data_offset": 0, 00:10:10.145 "data_size": 65536 00:10:10.145 }, 00:10:10.145 { 00:10:10.145 "name": "BaseBdev3", 00:10:10.145 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:10.145 "is_configured": true, 00:10:10.145 "data_offset": 0, 00:10:10.145 "data_size": 65536 00:10:10.145 }, 00:10:10.145 { 00:10:10.145 "name": "BaseBdev4", 00:10:10.145 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:10.145 "is_configured": true, 00:10:10.145 "data_offset": 0, 00:10:10.145 "data_size": 65536 00:10:10.145 } 00:10:10.145 ] 00:10:10.145 }' 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.145 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.713 09:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.713 [2024-12-12 09:23:44.618779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.713 09:23:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.713 "name": "Existed_Raid", 00:10:10.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.713 "strip_size_kb": 64, 00:10:10.713 "state": "configuring", 00:10:10.713 "raid_level": "raid0", 00:10:10.713 "superblock": false, 00:10:10.713 "num_base_bdevs": 4, 00:10:10.713 "num_base_bdevs_discovered": 2, 00:10:10.713 "num_base_bdevs_operational": 4, 00:10:10.713 "base_bdevs_list": [ 00:10:10.713 { 00:10:10.713 "name": "BaseBdev1", 00:10:10.713 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:10.713 "is_configured": true, 00:10:10.713 "data_offset": 0, 00:10:10.713 "data_size": 65536 00:10:10.713 }, 00:10:10.713 { 00:10:10.713 "name": null, 00:10:10.713 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:10.713 "is_configured": false, 00:10:10.713 "data_offset": 0, 00:10:10.713 "data_size": 65536 00:10:10.713 }, 00:10:10.713 { 00:10:10.713 "name": null, 00:10:10.713 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:10.713 "is_configured": false, 00:10:10.713 "data_offset": 0, 00:10:10.713 "data_size": 65536 00:10:10.713 }, 00:10:10.713 { 00:10:10.713 "name": "BaseBdev4", 00:10:10.713 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:10.713 "is_configured": true, 00:10:10.713 "data_offset": 0, 00:10:10.713 "data_size": 65536 00:10:10.713 } 00:10:10.713 ] 00:10:10.713 }' 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.713 09:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.283 [2024-12-12 09:23:45.113953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.283 "name": "Existed_Raid", 00:10:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.283 "strip_size_kb": 64, 00:10:11.283 "state": "configuring", 00:10:11.283 "raid_level": "raid0", 00:10:11.283 "superblock": false, 00:10:11.283 "num_base_bdevs": 4, 00:10:11.283 "num_base_bdevs_discovered": 3, 00:10:11.283 "num_base_bdevs_operational": 4, 00:10:11.283 "base_bdevs_list": [ 00:10:11.283 { 00:10:11.283 "name": "BaseBdev1", 00:10:11.283 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:11.283 "is_configured": true, 00:10:11.283 "data_offset": 0, 00:10:11.283 "data_size": 65536 00:10:11.283 }, 00:10:11.283 { 00:10:11.283 "name": null, 00:10:11.283 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:11.283 "is_configured": false, 00:10:11.283 "data_offset": 0, 00:10:11.283 "data_size": 65536 00:10:11.283 }, 00:10:11.283 { 00:10:11.283 "name": "BaseBdev3", 00:10:11.283 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 
00:10:11.283 "is_configured": true, 00:10:11.283 "data_offset": 0, 00:10:11.283 "data_size": 65536 00:10:11.283 }, 00:10:11.283 { 00:10:11.283 "name": "BaseBdev4", 00:10:11.283 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:11.283 "is_configured": true, 00:10:11.283 "data_offset": 0, 00:10:11.283 "data_size": 65536 00:10:11.283 } 00:10:11.283 ] 00:10:11.283 }' 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.283 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.543 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.543 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.543 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.543 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.543 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.801 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.802 [2024-12-12 09:23:45.589190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.802 09:23:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.802 "name": "Existed_Raid", 00:10:11.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.802 "strip_size_kb": 64, 00:10:11.802 "state": "configuring", 00:10:11.802 "raid_level": "raid0", 00:10:11.802 "superblock": false, 00:10:11.802 "num_base_bdevs": 4, 00:10:11.802 "num_base_bdevs_discovered": 2, 00:10:11.802 
"num_base_bdevs_operational": 4, 00:10:11.802 "base_bdevs_list": [ 00:10:11.802 { 00:10:11.802 "name": null, 00:10:11.802 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:11.802 "is_configured": false, 00:10:11.802 "data_offset": 0, 00:10:11.802 "data_size": 65536 00:10:11.802 }, 00:10:11.802 { 00:10:11.802 "name": null, 00:10:11.802 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:11.802 "is_configured": false, 00:10:11.802 "data_offset": 0, 00:10:11.802 "data_size": 65536 00:10:11.802 }, 00:10:11.802 { 00:10:11.802 "name": "BaseBdev3", 00:10:11.802 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:11.802 "is_configured": true, 00:10:11.802 "data_offset": 0, 00:10:11.802 "data_size": 65536 00:10:11.802 }, 00:10:11.802 { 00:10:11.802 "name": "BaseBdev4", 00:10:11.802 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:11.802 "is_configured": true, 00:10:11.802 "data_offset": 0, 00:10:11.802 "data_size": 65536 00:10:11.802 } 00:10:11.802 ] 00:10:11.802 }' 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.802 09:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.061 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.061 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.061 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.061 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.320 [2024-12-12 09:23:46.131220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.320 
09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.320 "name": "Existed_Raid", 00:10:12.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.320 "strip_size_kb": 64, 00:10:12.320 "state": "configuring", 00:10:12.320 "raid_level": "raid0", 00:10:12.320 "superblock": false, 00:10:12.320 "num_base_bdevs": 4, 00:10:12.320 "num_base_bdevs_discovered": 3, 00:10:12.320 "num_base_bdevs_operational": 4, 00:10:12.320 "base_bdevs_list": [ 00:10:12.320 { 00:10:12.320 "name": null, 00:10:12.320 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:12.320 "is_configured": false, 00:10:12.320 "data_offset": 0, 00:10:12.320 "data_size": 65536 00:10:12.320 }, 00:10:12.320 { 00:10:12.320 "name": "BaseBdev2", 00:10:12.320 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:12.320 "is_configured": true, 00:10:12.320 "data_offset": 0, 00:10:12.320 "data_size": 65536 00:10:12.320 }, 00:10:12.320 { 00:10:12.320 "name": "BaseBdev3", 00:10:12.320 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:12.320 "is_configured": true, 00:10:12.320 "data_offset": 0, 00:10:12.320 "data_size": 65536 00:10:12.320 }, 00:10:12.320 { 00:10:12.320 "name": "BaseBdev4", 00:10:12.320 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:12.320 "is_configured": true, 00:10:12.320 "data_offset": 0, 00:10:12.320 "data_size": 65536 00:10:12.320 } 00:10:12.320 ] 00:10:12.320 }' 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.320 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.580 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.580 09:23:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.580 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.580 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.580 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 28e119c2-67cf-4128-a3f8-6ab716e7e671 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.840 [2024-12-12 09:23:46.719464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.840 [2024-12-12 09:23:46.719593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.840 [2024-12-12 09:23:46.719626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:12.840 [2024-12-12 09:23:46.719970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:12.840 [2024-12-12 09:23:46.720206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.840 [2024-12-12 09:23:46.720249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:12.840 [2024-12-12 09:23:46.720555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.840 NewBaseBdev 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:12.840 [ 00:10:12.840 { 00:10:12.840 "name": "NewBaseBdev", 00:10:12.840 "aliases": [ 00:10:12.840 "28e119c2-67cf-4128-a3f8-6ab716e7e671" 00:10:12.840 ], 00:10:12.840 "product_name": "Malloc disk", 00:10:12.840 "block_size": 512, 00:10:12.840 "num_blocks": 65536, 00:10:12.840 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:12.840 "assigned_rate_limits": { 00:10:12.840 "rw_ios_per_sec": 0, 00:10:12.840 "rw_mbytes_per_sec": 0, 00:10:12.840 "r_mbytes_per_sec": 0, 00:10:12.840 "w_mbytes_per_sec": 0 00:10:12.840 }, 00:10:12.840 "claimed": true, 00:10:12.840 "claim_type": "exclusive_write", 00:10:12.840 "zoned": false, 00:10:12.840 "supported_io_types": { 00:10:12.840 "read": true, 00:10:12.840 "write": true, 00:10:12.840 "unmap": true, 00:10:12.840 "flush": true, 00:10:12.840 "reset": true, 00:10:12.840 "nvme_admin": false, 00:10:12.840 "nvme_io": false, 00:10:12.840 "nvme_io_md": false, 00:10:12.840 "write_zeroes": true, 00:10:12.840 "zcopy": true, 00:10:12.840 "get_zone_info": false, 00:10:12.840 "zone_management": false, 00:10:12.840 "zone_append": false, 00:10:12.840 "compare": false, 00:10:12.840 "compare_and_write": false, 00:10:12.840 "abort": true, 00:10:12.840 "seek_hole": false, 00:10:12.840 "seek_data": false, 00:10:12.840 "copy": true, 00:10:12.840 "nvme_iov_md": false 00:10:12.840 }, 00:10:12.840 "memory_domains": [ 00:10:12.840 { 00:10:12.840 "dma_device_id": "system", 00:10:12.840 "dma_device_type": 1 00:10:12.840 }, 00:10:12.840 { 00:10:12.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.840 "dma_device_type": 2 00:10:12.840 } 00:10:12.840 ], 00:10:12.840 "driver_specific": {} 00:10:12.840 } 00:10:12.840 ] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.840 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.840 "name": "Existed_Raid", 00:10:12.840 "uuid": "a0f553d0-8daa-4abe-9b5c-5dae92d61295", 00:10:12.840 "strip_size_kb": 64, 00:10:12.840 "state": "online", 00:10:12.840 "raid_level": "raid0", 00:10:12.841 "superblock": false, 00:10:12.841 "num_base_bdevs": 4, 00:10:12.841 
"num_base_bdevs_discovered": 4, 00:10:12.841 "num_base_bdevs_operational": 4, 00:10:12.841 "base_bdevs_list": [ 00:10:12.841 { 00:10:12.841 "name": "NewBaseBdev", 00:10:12.841 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:12.841 "is_configured": true, 00:10:12.841 "data_offset": 0, 00:10:12.841 "data_size": 65536 00:10:12.841 }, 00:10:12.841 { 00:10:12.841 "name": "BaseBdev2", 00:10:12.841 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:12.841 "is_configured": true, 00:10:12.841 "data_offset": 0, 00:10:12.841 "data_size": 65536 00:10:12.841 }, 00:10:12.841 { 00:10:12.841 "name": "BaseBdev3", 00:10:12.841 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:12.841 "is_configured": true, 00:10:12.841 "data_offset": 0, 00:10:12.841 "data_size": 65536 00:10:12.841 }, 00:10:12.841 { 00:10:12.841 "name": "BaseBdev4", 00:10:12.841 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:12.841 "is_configured": true, 00:10:12.841 "data_offset": 0, 00:10:12.841 "data_size": 65536 00:10:12.841 } 00:10:12.841 ] 00:10:12.841 }' 00:10:12.841 09:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.841 09:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.410 [2024-12-12 09:23:47.235009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.410 "name": "Existed_Raid", 00:10:13.410 "aliases": [ 00:10:13.410 "a0f553d0-8daa-4abe-9b5c-5dae92d61295" 00:10:13.410 ], 00:10:13.410 "product_name": "Raid Volume", 00:10:13.410 "block_size": 512, 00:10:13.410 "num_blocks": 262144, 00:10:13.410 "uuid": "a0f553d0-8daa-4abe-9b5c-5dae92d61295", 00:10:13.410 "assigned_rate_limits": { 00:10:13.410 "rw_ios_per_sec": 0, 00:10:13.410 "rw_mbytes_per_sec": 0, 00:10:13.410 "r_mbytes_per_sec": 0, 00:10:13.410 "w_mbytes_per_sec": 0 00:10:13.410 }, 00:10:13.410 "claimed": false, 00:10:13.410 "zoned": false, 00:10:13.410 "supported_io_types": { 00:10:13.410 "read": true, 00:10:13.410 "write": true, 00:10:13.410 "unmap": true, 00:10:13.410 "flush": true, 00:10:13.410 "reset": true, 00:10:13.410 "nvme_admin": false, 00:10:13.410 "nvme_io": false, 00:10:13.410 "nvme_io_md": false, 00:10:13.410 "write_zeroes": true, 00:10:13.410 "zcopy": false, 00:10:13.410 "get_zone_info": false, 00:10:13.410 "zone_management": false, 00:10:13.410 "zone_append": false, 00:10:13.410 "compare": false, 00:10:13.410 "compare_and_write": false, 00:10:13.410 "abort": false, 00:10:13.410 "seek_hole": false, 00:10:13.410 "seek_data": false, 00:10:13.410 "copy": false, 00:10:13.410 "nvme_iov_md": false 00:10:13.410 }, 00:10:13.410 "memory_domains": [ 
00:10:13.410 { 00:10:13.410 "dma_device_id": "system", 00:10:13.410 "dma_device_type": 1 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.410 "dma_device_type": 2 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "system", 00:10:13.410 "dma_device_type": 1 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.410 "dma_device_type": 2 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "system", 00:10:13.410 "dma_device_type": 1 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.410 "dma_device_type": 2 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "system", 00:10:13.410 "dma_device_type": 1 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.410 "dma_device_type": 2 00:10:13.410 } 00:10:13.410 ], 00:10:13.410 "driver_specific": { 00:10:13.410 "raid": { 00:10:13.410 "uuid": "a0f553d0-8daa-4abe-9b5c-5dae92d61295", 00:10:13.410 "strip_size_kb": 64, 00:10:13.410 "state": "online", 00:10:13.410 "raid_level": "raid0", 00:10:13.410 "superblock": false, 00:10:13.410 "num_base_bdevs": 4, 00:10:13.410 "num_base_bdevs_discovered": 4, 00:10:13.410 "num_base_bdevs_operational": 4, 00:10:13.410 "base_bdevs_list": [ 00:10:13.410 { 00:10:13.410 "name": "NewBaseBdev", 00:10:13.410 "uuid": "28e119c2-67cf-4128-a3f8-6ab716e7e671", 00:10:13.410 "is_configured": true, 00:10:13.410 "data_offset": 0, 00:10:13.410 "data_size": 65536 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "name": "BaseBdev2", 00:10:13.410 "uuid": "468662e3-7613-491b-a2ab-1bfb4d7227ea", 00:10:13.410 "is_configured": true, 00:10:13.410 "data_offset": 0, 00:10:13.410 "data_size": 65536 00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "name": "BaseBdev3", 00:10:13.410 "uuid": "cffd27c2-1bd8-4fa1-af60-55f5ab255d8b", 00:10:13.410 "is_configured": true, 00:10:13.410 "data_offset": 0, 00:10:13.410 "data_size": 65536 
00:10:13.410 }, 00:10:13.410 { 00:10:13.410 "name": "BaseBdev4", 00:10:13.410 "uuid": "432f49ed-bd58-404b-9349-59df49edc935", 00:10:13.410 "is_configured": true, 00:10:13.410 "data_offset": 0, 00:10:13.410 "data_size": 65536 00:10:13.410 } 00:10:13.410 ] 00:10:13.410 } 00:10:13.410 } 00:10:13.410 }' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.410 BaseBdev2 00:10:13.410 BaseBdev3 00:10:13.410 BaseBdev4' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.410 
09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.410 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.671 [2024-12-12 09:23:47.562056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.671 [2024-12-12 09:23:47.562086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.671 [2024-12-12 09:23:47.562160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.671 [2024-12-12 09:23:47.562232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.671 [2024-12-12 09:23:47.562242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70516 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 70516 ']' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 70516 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70516 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70516' 00:10:13.671 killing process with pid 70516 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 70516 00:10:13.671 [2024-12-12 09:23:47.610450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.671 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 70516 00:10:14.242 [2024-12-12 09:23:48.031894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.622 00:10:15.622 real 0m11.743s 00:10:15.622 user 0m18.393s 00:10:15.622 sys 0m2.232s 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.622 ************************************ 00:10:15.622 END TEST raid_state_function_test 00:10:15.622 ************************************ 00:10:15.622 09:23:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:15.622 09:23:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.622 09:23:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.622 09:23:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.622 ************************************ 00:10:15.622 START TEST raid_state_function_test_sb 00:10:15.622 ************************************ 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:15.622 
09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:15.622 Process raid pid: 71192 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71192 00:10:15.622 09:23:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71192' 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71192 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71192 ']' 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.622 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.623 09:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.623 [2024-12-12 09:23:49.402582] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:15.623 [2024-12-12 09:23:49.402761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.623 [2024-12-12 09:23:49.578249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.882 [2024-12-12 09:23:49.711396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.152 [2024-12-12 09:23:49.956013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.152 [2024-12-12 09:23:49.956152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.428 [2024-12-12 09:23:50.222662] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.428 [2024-12-12 09:23:50.222773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.428 [2024-12-12 09:23:50.222804] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.428 [2024-12-12 09:23:50.222827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.428 [2024-12-12 09:23:50.222846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:16.428 [2024-12-12 09:23:50.222867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.428 [2024-12-12 09:23:50.222885] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:16.428 [2024-12-12 09:23:50.222907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.428 09:23:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.428 "name": "Existed_Raid", 00:10:16.428 "uuid": "8390d38b-bad5-4950-8aa0-7b32ada6b8ea", 00:10:16.428 "strip_size_kb": 64, 00:10:16.428 "state": "configuring", 00:10:16.428 "raid_level": "raid0", 00:10:16.428 "superblock": true, 00:10:16.428 "num_base_bdevs": 4, 00:10:16.428 "num_base_bdevs_discovered": 0, 00:10:16.428 "num_base_bdevs_operational": 4, 00:10:16.428 "base_bdevs_list": [ 00:10:16.428 { 00:10:16.428 "name": "BaseBdev1", 00:10:16.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.428 "is_configured": false, 00:10:16.428 "data_offset": 0, 00:10:16.428 "data_size": 0 00:10:16.428 }, 00:10:16.428 { 00:10:16.428 "name": "BaseBdev2", 00:10:16.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.428 "is_configured": false, 00:10:16.428 "data_offset": 0, 00:10:16.428 "data_size": 0 00:10:16.428 }, 00:10:16.428 { 00:10:16.428 "name": "BaseBdev3", 00:10:16.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.428 "is_configured": false, 00:10:16.428 "data_offset": 0, 00:10:16.428 "data_size": 0 00:10:16.428 }, 00:10:16.428 { 00:10:16.428 "name": "BaseBdev4", 00:10:16.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.428 "is_configured": false, 00:10:16.428 "data_offset": 0, 00:10:16.428 "data_size": 0 00:10:16.428 } 00:10:16.428 ] 00:10:16.428 }' 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.428 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.688 [2024-12-12 09:23:50.669877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.688 [2024-12-12 09:23:50.670026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.688 [2024-12-12 09:23:50.677846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.688 [2024-12-12 09:23:50.677927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.688 [2024-12-12 09:23:50.677964] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.688 [2024-12-12 09:23:50.677988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.688 [2024-12-12 09:23:50.678006] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.688 [2024-12-12 09:23:50.678027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.688 [2024-12-12 09:23:50.678045] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:16.688 [2024-12-12 09:23:50.678082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.688 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.947 [2024-12-12 09:23:50.727064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.947 BaseBdev1 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.947 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.947 [ 00:10:16.947 { 00:10:16.947 "name": "BaseBdev1", 00:10:16.947 "aliases": [ 00:10:16.947 "ba5c4ca4-7347-4569-b5b5-a175b49aaf75" 00:10:16.947 ], 00:10:16.947 "product_name": "Malloc disk", 00:10:16.947 "block_size": 512, 00:10:16.947 "num_blocks": 65536, 00:10:16.947 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:16.947 "assigned_rate_limits": { 00:10:16.947 "rw_ios_per_sec": 0, 00:10:16.947 "rw_mbytes_per_sec": 0, 00:10:16.947 "r_mbytes_per_sec": 0, 00:10:16.947 "w_mbytes_per_sec": 0 00:10:16.947 }, 00:10:16.947 "claimed": true, 00:10:16.947 "claim_type": "exclusive_write", 00:10:16.947 "zoned": false, 00:10:16.947 "supported_io_types": { 00:10:16.947 "read": true, 00:10:16.947 "write": true, 00:10:16.947 "unmap": true, 00:10:16.947 "flush": true, 00:10:16.947 "reset": true, 00:10:16.948 "nvme_admin": false, 00:10:16.948 "nvme_io": false, 00:10:16.948 "nvme_io_md": false, 00:10:16.948 "write_zeroes": true, 00:10:16.948 "zcopy": true, 00:10:16.948 "get_zone_info": false, 00:10:16.948 "zone_management": false, 00:10:16.948 "zone_append": false, 00:10:16.948 "compare": false, 00:10:16.948 "compare_and_write": false, 00:10:16.948 "abort": true, 00:10:16.948 "seek_hole": false, 00:10:16.948 "seek_data": false, 00:10:16.948 "copy": true, 00:10:16.948 "nvme_iov_md": false 00:10:16.948 }, 00:10:16.948 "memory_domains": [ 00:10:16.948 { 00:10:16.948 "dma_device_id": "system", 00:10:16.948 "dma_device_type": 1 00:10:16.948 }, 00:10:16.948 { 00:10:16.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.948 "dma_device_type": 2 00:10:16.948 } 00:10:16.948 ], 00:10:16.948 "driver_specific": {} 
00:10:16.948 } 00:10:16.948 ] 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.948 "name": "Existed_Raid", 00:10:16.948 "uuid": "1e53e591-6ba2-48a2-aa8d-8fdfefe7f901", 00:10:16.948 "strip_size_kb": 64, 00:10:16.948 "state": "configuring", 00:10:16.948 "raid_level": "raid0", 00:10:16.948 "superblock": true, 00:10:16.948 "num_base_bdevs": 4, 00:10:16.948 "num_base_bdevs_discovered": 1, 00:10:16.948 "num_base_bdevs_operational": 4, 00:10:16.948 "base_bdevs_list": [ 00:10:16.948 { 00:10:16.948 "name": "BaseBdev1", 00:10:16.948 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:16.948 "is_configured": true, 00:10:16.948 "data_offset": 2048, 00:10:16.948 "data_size": 63488 00:10:16.948 }, 00:10:16.948 { 00:10:16.948 "name": "BaseBdev2", 00:10:16.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.948 "is_configured": false, 00:10:16.948 "data_offset": 0, 00:10:16.948 "data_size": 0 00:10:16.948 }, 00:10:16.948 { 00:10:16.948 "name": "BaseBdev3", 00:10:16.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.948 "is_configured": false, 00:10:16.948 "data_offset": 0, 00:10:16.948 "data_size": 0 00:10:16.948 }, 00:10:16.948 { 00:10:16.948 "name": "BaseBdev4", 00:10:16.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.948 "is_configured": false, 00:10:16.948 "data_offset": 0, 00:10:16.948 "data_size": 0 00:10:16.948 } 00:10:16.948 ] 00:10:16.948 }' 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.948 09:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.208 [2024-12-12 09:23:51.206278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.208 [2024-12-12 09:23:51.206418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.208 [2024-12-12 09:23:51.218317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.208 [2024-12-12 09:23:51.220521] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.208 [2024-12-12 09:23:51.220567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.208 [2024-12-12 09:23:51.220578] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.208 [2024-12-12 09:23:51.220590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.208 [2024-12-12 09:23:51.220597] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.208 [2024-12-12 09:23:51.220607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:17.208 09:23:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.208 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.467 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.467 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.467 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.467 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.467 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.467 "name": 
"Existed_Raid", 00:10:17.467 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:17.467 "strip_size_kb": 64, 00:10:17.467 "state": "configuring", 00:10:17.467 "raid_level": "raid0", 00:10:17.467 "superblock": true, 00:10:17.467 "num_base_bdevs": 4, 00:10:17.467 "num_base_bdevs_discovered": 1, 00:10:17.467 "num_base_bdevs_operational": 4, 00:10:17.467 "base_bdevs_list": [ 00:10:17.467 { 00:10:17.467 "name": "BaseBdev1", 00:10:17.467 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:17.467 "is_configured": true, 00:10:17.467 "data_offset": 2048, 00:10:17.467 "data_size": 63488 00:10:17.467 }, 00:10:17.467 { 00:10:17.467 "name": "BaseBdev2", 00:10:17.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.467 "is_configured": false, 00:10:17.467 "data_offset": 0, 00:10:17.467 "data_size": 0 00:10:17.467 }, 00:10:17.467 { 00:10:17.467 "name": "BaseBdev3", 00:10:17.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.467 "is_configured": false, 00:10:17.467 "data_offset": 0, 00:10:17.467 "data_size": 0 00:10:17.467 }, 00:10:17.467 { 00:10:17.467 "name": "BaseBdev4", 00:10:17.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.467 "is_configured": false, 00:10:17.467 "data_offset": 0, 00:10:17.467 "data_size": 0 00:10:17.467 } 00:10:17.467 ] 00:10:17.467 }' 00:10:17.467 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.468 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.728 [2024-12-12 09:23:51.653214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:17.728 BaseBdev2 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.728 [ 00:10:17.728 { 00:10:17.728 "name": "BaseBdev2", 00:10:17.728 "aliases": [ 00:10:17.728 "209fe6b7-70c7-4503-9c7a-f9cb28fc2930" 00:10:17.728 ], 00:10:17.728 "product_name": "Malloc disk", 00:10:17.728 "block_size": 512, 00:10:17.728 "num_blocks": 65536, 00:10:17.728 "uuid": "209fe6b7-70c7-4503-9c7a-f9cb28fc2930", 00:10:17.728 
"assigned_rate_limits": { 00:10:17.728 "rw_ios_per_sec": 0, 00:10:17.728 "rw_mbytes_per_sec": 0, 00:10:17.728 "r_mbytes_per_sec": 0, 00:10:17.728 "w_mbytes_per_sec": 0 00:10:17.728 }, 00:10:17.728 "claimed": true, 00:10:17.728 "claim_type": "exclusive_write", 00:10:17.728 "zoned": false, 00:10:17.728 "supported_io_types": { 00:10:17.728 "read": true, 00:10:17.728 "write": true, 00:10:17.728 "unmap": true, 00:10:17.728 "flush": true, 00:10:17.728 "reset": true, 00:10:17.728 "nvme_admin": false, 00:10:17.728 "nvme_io": false, 00:10:17.728 "nvme_io_md": false, 00:10:17.728 "write_zeroes": true, 00:10:17.728 "zcopy": true, 00:10:17.728 "get_zone_info": false, 00:10:17.728 "zone_management": false, 00:10:17.728 "zone_append": false, 00:10:17.728 "compare": false, 00:10:17.728 "compare_and_write": false, 00:10:17.728 "abort": true, 00:10:17.728 "seek_hole": false, 00:10:17.728 "seek_data": false, 00:10:17.728 "copy": true, 00:10:17.728 "nvme_iov_md": false 00:10:17.728 }, 00:10:17.728 "memory_domains": [ 00:10:17.728 { 00:10:17.728 "dma_device_id": "system", 00:10:17.728 "dma_device_type": 1 00:10:17.728 }, 00:10:17.728 { 00:10:17.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.728 "dma_device_type": 2 00:10:17.728 } 00:10:17.728 ], 00:10:17.728 "driver_specific": {} 00:10:17.728 } 00:10:17.728 ] 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.728 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.729 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.729 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.729 "name": "Existed_Raid", 00:10:17.729 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:17.729 "strip_size_kb": 64, 00:10:17.729 "state": "configuring", 00:10:17.729 "raid_level": "raid0", 00:10:17.729 "superblock": true, 00:10:17.729 "num_base_bdevs": 4, 00:10:17.729 "num_base_bdevs_discovered": 2, 00:10:17.729 "num_base_bdevs_operational": 4, 
00:10:17.729 "base_bdevs_list": [ 00:10:17.729 { 00:10:17.729 "name": "BaseBdev1", 00:10:17.729 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:17.729 "is_configured": true, 00:10:17.729 "data_offset": 2048, 00:10:17.729 "data_size": 63488 00:10:17.729 }, 00:10:17.729 { 00:10:17.729 "name": "BaseBdev2", 00:10:17.729 "uuid": "209fe6b7-70c7-4503-9c7a-f9cb28fc2930", 00:10:17.729 "is_configured": true, 00:10:17.729 "data_offset": 2048, 00:10:17.729 "data_size": 63488 00:10:17.729 }, 00:10:17.729 { 00:10:17.729 "name": "BaseBdev3", 00:10:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.729 "is_configured": false, 00:10:17.729 "data_offset": 0, 00:10:17.729 "data_size": 0 00:10:17.729 }, 00:10:17.729 { 00:10:17.729 "name": "BaseBdev4", 00:10:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.729 "is_configured": false, 00:10:17.729 "data_offset": 0, 00:10:17.729 "data_size": 0 00:10:17.729 } 00:10:17.729 ] 00:10:17.729 }' 00:10:17.729 09:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.729 09:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 [2024-12-12 09:23:52.183942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.297 BaseBdev3 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 [ 00:10:18.297 { 00:10:18.297 "name": "BaseBdev3", 00:10:18.297 "aliases": [ 00:10:18.297 "04584247-ea03-4a48-b257-76bdfbb2e76f" 00:10:18.297 ], 00:10:18.297 "product_name": "Malloc disk", 00:10:18.297 "block_size": 512, 00:10:18.297 "num_blocks": 65536, 00:10:18.297 "uuid": "04584247-ea03-4a48-b257-76bdfbb2e76f", 00:10:18.297 "assigned_rate_limits": { 00:10:18.297 "rw_ios_per_sec": 0, 00:10:18.297 "rw_mbytes_per_sec": 0, 00:10:18.297 "r_mbytes_per_sec": 0, 00:10:18.297 "w_mbytes_per_sec": 0 00:10:18.297 }, 00:10:18.297 "claimed": true, 00:10:18.297 "claim_type": "exclusive_write", 00:10:18.297 "zoned": false, 00:10:18.297 "supported_io_types": { 00:10:18.297 "read": true, 00:10:18.297 
"write": true, 00:10:18.297 "unmap": true, 00:10:18.297 "flush": true, 00:10:18.297 "reset": true, 00:10:18.297 "nvme_admin": false, 00:10:18.297 "nvme_io": false, 00:10:18.297 "nvme_io_md": false, 00:10:18.297 "write_zeroes": true, 00:10:18.297 "zcopy": true, 00:10:18.297 "get_zone_info": false, 00:10:18.297 "zone_management": false, 00:10:18.297 "zone_append": false, 00:10:18.297 "compare": false, 00:10:18.297 "compare_and_write": false, 00:10:18.297 "abort": true, 00:10:18.297 "seek_hole": false, 00:10:18.297 "seek_data": false, 00:10:18.297 "copy": true, 00:10:18.297 "nvme_iov_md": false 00:10:18.297 }, 00:10:18.297 "memory_domains": [ 00:10:18.297 { 00:10:18.297 "dma_device_id": "system", 00:10:18.297 "dma_device_type": 1 00:10:18.297 }, 00:10:18.297 { 00:10:18.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.297 "dma_device_type": 2 00:10:18.297 } 00:10:18.297 ], 00:10:18.297 "driver_specific": {} 00:10:18.297 } 00:10:18.297 ] 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.297 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.297 "name": "Existed_Raid", 00:10:18.297 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:18.297 "strip_size_kb": 64, 00:10:18.297 "state": "configuring", 00:10:18.298 "raid_level": "raid0", 00:10:18.298 "superblock": true, 00:10:18.298 "num_base_bdevs": 4, 00:10:18.298 "num_base_bdevs_discovered": 3, 00:10:18.298 "num_base_bdevs_operational": 4, 00:10:18.298 "base_bdevs_list": [ 00:10:18.298 { 00:10:18.298 "name": "BaseBdev1", 00:10:18.298 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:18.298 "is_configured": true, 00:10:18.298 "data_offset": 2048, 00:10:18.298 "data_size": 63488 00:10:18.298 }, 00:10:18.298 { 00:10:18.298 "name": "BaseBdev2", 00:10:18.298 "uuid": 
"209fe6b7-70c7-4503-9c7a-f9cb28fc2930", 00:10:18.298 "is_configured": true, 00:10:18.298 "data_offset": 2048, 00:10:18.298 "data_size": 63488 00:10:18.298 }, 00:10:18.298 { 00:10:18.298 "name": "BaseBdev3", 00:10:18.298 "uuid": "04584247-ea03-4a48-b257-76bdfbb2e76f", 00:10:18.298 "is_configured": true, 00:10:18.298 "data_offset": 2048, 00:10:18.298 "data_size": 63488 00:10:18.298 }, 00:10:18.298 { 00:10:18.298 "name": "BaseBdev4", 00:10:18.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.298 "is_configured": false, 00:10:18.298 "data_offset": 0, 00:10:18.298 "data_size": 0 00:10:18.298 } 00:10:18.298 ] 00:10:18.298 }' 00:10:18.298 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.298 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 [2024-12-12 09:23:52.691738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:18.866 [2024-12-12 09:23:52.692097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.866 [2024-12-12 09:23:52.692116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:18.866 BaseBdev4 00:10:18.866 [2024-12-12 09:23:52.692431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:18.866 [2024-12-12 09:23:52.692611] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:18.866 [2024-12-12 09:23:52.692624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:18.866 [2024-12-12 09:23:52.692767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.867 [ 00:10:18.867 { 00:10:18.867 "name": "BaseBdev4", 00:10:18.867 "aliases": [ 00:10:18.867 "8c5a50f0-5208-42fb-99ef-53d3b0f1254a" 00:10:18.867 ], 00:10:18.867 "product_name": "Malloc disk", 00:10:18.867 "block_size": 512, 00:10:18.867 
"num_blocks": 65536, 00:10:18.867 "uuid": "8c5a50f0-5208-42fb-99ef-53d3b0f1254a", 00:10:18.867 "assigned_rate_limits": { 00:10:18.867 "rw_ios_per_sec": 0, 00:10:18.867 "rw_mbytes_per_sec": 0, 00:10:18.867 "r_mbytes_per_sec": 0, 00:10:18.867 "w_mbytes_per_sec": 0 00:10:18.867 }, 00:10:18.867 "claimed": true, 00:10:18.867 "claim_type": "exclusive_write", 00:10:18.867 "zoned": false, 00:10:18.867 "supported_io_types": { 00:10:18.867 "read": true, 00:10:18.867 "write": true, 00:10:18.867 "unmap": true, 00:10:18.867 "flush": true, 00:10:18.867 "reset": true, 00:10:18.867 "nvme_admin": false, 00:10:18.867 "nvme_io": false, 00:10:18.867 "nvme_io_md": false, 00:10:18.867 "write_zeroes": true, 00:10:18.867 "zcopy": true, 00:10:18.867 "get_zone_info": false, 00:10:18.867 "zone_management": false, 00:10:18.867 "zone_append": false, 00:10:18.867 "compare": false, 00:10:18.867 "compare_and_write": false, 00:10:18.867 "abort": true, 00:10:18.867 "seek_hole": false, 00:10:18.867 "seek_data": false, 00:10:18.867 "copy": true, 00:10:18.867 "nvme_iov_md": false 00:10:18.867 }, 00:10:18.867 "memory_domains": [ 00:10:18.867 { 00:10:18.867 "dma_device_id": "system", 00:10:18.867 "dma_device_type": 1 00:10:18.867 }, 00:10:18.867 { 00:10:18.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.867 "dma_device_type": 2 00:10:18.867 } 00:10:18.867 ], 00:10:18.867 "driver_specific": {} 00:10:18.867 } 00:10:18.867 ] 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.867 "name": "Existed_Raid", 00:10:18.867 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:18.867 "strip_size_kb": 64, 00:10:18.867 "state": "online", 00:10:18.867 "raid_level": "raid0", 00:10:18.867 "superblock": true, 00:10:18.867 "num_base_bdevs": 4, 
00:10:18.867 "num_base_bdevs_discovered": 4, 00:10:18.867 "num_base_bdevs_operational": 4, 00:10:18.867 "base_bdevs_list": [ 00:10:18.867 { 00:10:18.867 "name": "BaseBdev1", 00:10:18.867 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:18.867 "is_configured": true, 00:10:18.867 "data_offset": 2048, 00:10:18.867 "data_size": 63488 00:10:18.867 }, 00:10:18.867 { 00:10:18.867 "name": "BaseBdev2", 00:10:18.867 "uuid": "209fe6b7-70c7-4503-9c7a-f9cb28fc2930", 00:10:18.867 "is_configured": true, 00:10:18.867 "data_offset": 2048, 00:10:18.867 "data_size": 63488 00:10:18.867 }, 00:10:18.867 { 00:10:18.867 "name": "BaseBdev3", 00:10:18.867 "uuid": "04584247-ea03-4a48-b257-76bdfbb2e76f", 00:10:18.867 "is_configured": true, 00:10:18.867 "data_offset": 2048, 00:10:18.867 "data_size": 63488 00:10:18.867 }, 00:10:18.867 { 00:10:18.867 "name": "BaseBdev4", 00:10:18.867 "uuid": "8c5a50f0-5208-42fb-99ef-53d3b0f1254a", 00:10:18.867 "is_configured": true, 00:10:18.867 "data_offset": 2048, 00:10:18.867 "data_size": 63488 00:10:18.867 } 00:10:18.867 ] 00:10:18.867 }' 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.867 09:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.127 
09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.127 [2024-12-12 09:23:53.119359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.127 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.386 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.386 "name": "Existed_Raid", 00:10:19.386 "aliases": [ 00:10:19.386 "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13" 00:10:19.386 ], 00:10:19.386 "product_name": "Raid Volume", 00:10:19.386 "block_size": 512, 00:10:19.386 "num_blocks": 253952, 00:10:19.386 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:19.387 "assigned_rate_limits": { 00:10:19.387 "rw_ios_per_sec": 0, 00:10:19.387 "rw_mbytes_per_sec": 0, 00:10:19.387 "r_mbytes_per_sec": 0, 00:10:19.387 "w_mbytes_per_sec": 0 00:10:19.387 }, 00:10:19.387 "claimed": false, 00:10:19.387 "zoned": false, 00:10:19.387 "supported_io_types": { 00:10:19.387 "read": true, 00:10:19.387 "write": true, 00:10:19.387 "unmap": true, 00:10:19.387 "flush": true, 00:10:19.387 "reset": true, 00:10:19.387 "nvme_admin": false, 00:10:19.387 "nvme_io": false, 00:10:19.387 "nvme_io_md": false, 00:10:19.387 "write_zeroes": true, 00:10:19.387 "zcopy": false, 00:10:19.387 "get_zone_info": false, 00:10:19.387 "zone_management": false, 00:10:19.387 "zone_append": false, 00:10:19.387 "compare": false, 00:10:19.387 "compare_and_write": false, 00:10:19.387 "abort": false, 00:10:19.387 "seek_hole": false, 00:10:19.387 "seek_data": false, 00:10:19.387 "copy": false, 00:10:19.387 
"nvme_iov_md": false 00:10:19.387 }, 00:10:19.387 "memory_domains": [ 00:10:19.387 { 00:10:19.387 "dma_device_id": "system", 00:10:19.387 "dma_device_type": 1 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.387 "dma_device_type": 2 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "system", 00:10:19.387 "dma_device_type": 1 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.387 "dma_device_type": 2 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "system", 00:10:19.387 "dma_device_type": 1 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.387 "dma_device_type": 2 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "system", 00:10:19.387 "dma_device_type": 1 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.387 "dma_device_type": 2 00:10:19.387 } 00:10:19.387 ], 00:10:19.387 "driver_specific": { 00:10:19.387 "raid": { 00:10:19.387 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:19.387 "strip_size_kb": 64, 00:10:19.387 "state": "online", 00:10:19.387 "raid_level": "raid0", 00:10:19.387 "superblock": true, 00:10:19.387 "num_base_bdevs": 4, 00:10:19.387 "num_base_bdevs_discovered": 4, 00:10:19.387 "num_base_bdevs_operational": 4, 00:10:19.387 "base_bdevs_list": [ 00:10:19.387 { 00:10:19.387 "name": "BaseBdev1", 00:10:19.387 "uuid": "ba5c4ca4-7347-4569-b5b5-a175b49aaf75", 00:10:19.387 "is_configured": true, 00:10:19.387 "data_offset": 2048, 00:10:19.387 "data_size": 63488 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "name": "BaseBdev2", 00:10:19.387 "uuid": "209fe6b7-70c7-4503-9c7a-f9cb28fc2930", 00:10:19.387 "is_configured": true, 00:10:19.387 "data_offset": 2048, 00:10:19.387 "data_size": 63488 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "name": "BaseBdev3", 00:10:19.387 "uuid": "04584247-ea03-4a48-b257-76bdfbb2e76f", 00:10:19.387 "is_configured": true, 
00:10:19.387 "data_offset": 2048, 00:10:19.387 "data_size": 63488 00:10:19.387 }, 00:10:19.387 { 00:10:19.387 "name": "BaseBdev4", 00:10:19.387 "uuid": "8c5a50f0-5208-42fb-99ef-53d3b0f1254a", 00:10:19.387 "is_configured": true, 00:10:19.387 "data_offset": 2048, 00:10:19.387 "data_size": 63488 00:10:19.387 } 00:10:19.387 ] 00:10:19.387 } 00:10:19.387 } 00:10:19.387 }' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:19.387 BaseBdev2 00:10:19.387 BaseBdev3 00:10:19.387 BaseBdev4' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.387 09:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.387 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.646 [2024-12-12 09:23:53.414604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.646 [2024-12-12 09:23:53.414637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.646 [2024-12-12 09:23:53.414690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.646 "name": "Existed_Raid", 00:10:19.646 "uuid": "a5b5fe2b-8a8c-445e-b2f8-4f745ab21e13", 00:10:19.646 "strip_size_kb": 64, 00:10:19.646 "state": "offline", 00:10:19.646 "raid_level": "raid0", 00:10:19.646 "superblock": true, 00:10:19.646 "num_base_bdevs": 4, 00:10:19.646 "num_base_bdevs_discovered": 3, 00:10:19.646 "num_base_bdevs_operational": 3, 00:10:19.646 "base_bdevs_list": [ 00:10:19.646 { 00:10:19.646 "name": null, 00:10:19.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.646 "is_configured": false, 00:10:19.646 "data_offset": 0, 00:10:19.646 "data_size": 63488 00:10:19.646 }, 00:10:19.646 { 00:10:19.646 "name": "BaseBdev2", 00:10:19.646 "uuid": "209fe6b7-70c7-4503-9c7a-f9cb28fc2930", 00:10:19.646 "is_configured": true, 00:10:19.646 "data_offset": 2048, 00:10:19.646 "data_size": 63488 00:10:19.646 }, 00:10:19.646 { 00:10:19.646 "name": "BaseBdev3", 00:10:19.646 "uuid": "04584247-ea03-4a48-b257-76bdfbb2e76f", 00:10:19.646 "is_configured": true, 00:10:19.646 "data_offset": 2048, 00:10:19.646 "data_size": 63488 00:10:19.646 }, 00:10:19.646 { 00:10:19.646 "name": "BaseBdev4", 00:10:19.646 "uuid": "8c5a50f0-5208-42fb-99ef-53d3b0f1254a", 00:10:19.646 "is_configured": true, 00:10:19.646 "data_offset": 2048, 00:10:19.646 "data_size": 63488 00:10:19.646 } 00:10:19.646 ] 00:10:19.646 }' 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.646 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.215 09:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.215 09:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.215 [2024-12-12 09:23:53.999495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.215 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.215 [2024-12-12 09:23:54.156628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:20.475 09:23:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.475 [2024-12-12 09:23:54.316677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:20.475 [2024-12-12 09:23:54.316736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.475 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.735 BaseBdev2 00:10:20.735 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.735 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:20.735 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 [ 00:10:20.736 { 00:10:20.736 "name": "BaseBdev2", 00:10:20.736 "aliases": [ 00:10:20.736 
"393c0660-1498-4554-8fa6-055671c1ad53" 00:10:20.736 ], 00:10:20.736 "product_name": "Malloc disk", 00:10:20.736 "block_size": 512, 00:10:20.736 "num_blocks": 65536, 00:10:20.736 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:20.736 "assigned_rate_limits": { 00:10:20.736 "rw_ios_per_sec": 0, 00:10:20.736 "rw_mbytes_per_sec": 0, 00:10:20.736 "r_mbytes_per_sec": 0, 00:10:20.736 "w_mbytes_per_sec": 0 00:10:20.736 }, 00:10:20.736 "claimed": false, 00:10:20.736 "zoned": false, 00:10:20.736 "supported_io_types": { 00:10:20.736 "read": true, 00:10:20.736 "write": true, 00:10:20.736 "unmap": true, 00:10:20.736 "flush": true, 00:10:20.736 "reset": true, 00:10:20.736 "nvme_admin": false, 00:10:20.736 "nvme_io": false, 00:10:20.736 "nvme_io_md": false, 00:10:20.736 "write_zeroes": true, 00:10:20.736 "zcopy": true, 00:10:20.736 "get_zone_info": false, 00:10:20.736 "zone_management": false, 00:10:20.736 "zone_append": false, 00:10:20.736 "compare": false, 00:10:20.736 "compare_and_write": false, 00:10:20.736 "abort": true, 00:10:20.736 "seek_hole": false, 00:10:20.736 "seek_data": false, 00:10:20.736 "copy": true, 00:10:20.736 "nvme_iov_md": false 00:10:20.736 }, 00:10:20.736 "memory_domains": [ 00:10:20.736 { 00:10:20.736 "dma_device_id": "system", 00:10:20.736 "dma_device_type": 1 00:10:20.736 }, 00:10:20.736 { 00:10:20.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.736 "dma_device_type": 2 00:10:20.736 } 00:10:20.736 ], 00:10:20.736 "driver_specific": {} 00:10:20.736 } 00:10:20.736 ] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.736 09:23:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 BaseBdev3 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 [ 00:10:20.736 { 
00:10:20.736 "name": "BaseBdev3", 00:10:20.736 "aliases": [ 00:10:20.736 "1867eecb-41cc-4f07-9420-2f0f71353ea3" 00:10:20.736 ], 00:10:20.736 "product_name": "Malloc disk", 00:10:20.736 "block_size": 512, 00:10:20.736 "num_blocks": 65536, 00:10:20.736 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:20.736 "assigned_rate_limits": { 00:10:20.736 "rw_ios_per_sec": 0, 00:10:20.736 "rw_mbytes_per_sec": 0, 00:10:20.736 "r_mbytes_per_sec": 0, 00:10:20.736 "w_mbytes_per_sec": 0 00:10:20.736 }, 00:10:20.736 "claimed": false, 00:10:20.736 "zoned": false, 00:10:20.736 "supported_io_types": { 00:10:20.736 "read": true, 00:10:20.736 "write": true, 00:10:20.736 "unmap": true, 00:10:20.736 "flush": true, 00:10:20.736 "reset": true, 00:10:20.736 "nvme_admin": false, 00:10:20.736 "nvme_io": false, 00:10:20.736 "nvme_io_md": false, 00:10:20.736 "write_zeroes": true, 00:10:20.736 "zcopy": true, 00:10:20.736 "get_zone_info": false, 00:10:20.736 "zone_management": false, 00:10:20.736 "zone_append": false, 00:10:20.736 "compare": false, 00:10:20.736 "compare_and_write": false, 00:10:20.736 "abort": true, 00:10:20.736 "seek_hole": false, 00:10:20.736 "seek_data": false, 00:10:20.736 "copy": true, 00:10:20.736 "nvme_iov_md": false 00:10:20.736 }, 00:10:20.736 "memory_domains": [ 00:10:20.736 { 00:10:20.736 "dma_device_id": "system", 00:10:20.736 "dma_device_type": 1 00:10:20.736 }, 00:10:20.736 { 00:10:20.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.736 "dma_device_type": 2 00:10:20.736 } 00:10:20.736 ], 00:10:20.736 "driver_specific": {} 00:10:20.736 } 00:10:20.736 ] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 BaseBdev4 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:20.736 [ 00:10:20.736 { 00:10:20.736 "name": "BaseBdev4", 00:10:20.737 "aliases": [ 00:10:20.737 "d1fe433c-093a-4ac6-801b-d36a847ee9eb" 00:10:20.737 ], 00:10:20.737 "product_name": "Malloc disk", 00:10:20.737 "block_size": 512, 00:10:20.737 "num_blocks": 65536, 00:10:20.737 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:20.737 "assigned_rate_limits": { 00:10:20.737 "rw_ios_per_sec": 0, 00:10:20.737 "rw_mbytes_per_sec": 0, 00:10:20.737 "r_mbytes_per_sec": 0, 00:10:20.737 "w_mbytes_per_sec": 0 00:10:20.737 }, 00:10:20.737 "claimed": false, 00:10:20.737 "zoned": false, 00:10:20.737 "supported_io_types": { 00:10:20.737 "read": true, 00:10:20.737 "write": true, 00:10:20.737 "unmap": true, 00:10:20.737 "flush": true, 00:10:20.737 "reset": true, 00:10:20.737 "nvme_admin": false, 00:10:20.737 "nvme_io": false, 00:10:20.737 "nvme_io_md": false, 00:10:20.737 "write_zeroes": true, 00:10:20.737 "zcopy": true, 00:10:20.737 "get_zone_info": false, 00:10:20.737 "zone_management": false, 00:10:20.737 "zone_append": false, 00:10:20.737 "compare": false, 00:10:20.737 "compare_and_write": false, 00:10:20.737 "abort": true, 00:10:20.737 "seek_hole": false, 00:10:20.737 "seek_data": false, 00:10:20.737 "copy": true, 00:10:20.737 "nvme_iov_md": false 00:10:20.737 }, 00:10:20.737 "memory_domains": [ 00:10:20.737 { 00:10:20.737 "dma_device_id": "system", 00:10:20.737 "dma_device_type": 1 00:10:20.737 }, 00:10:20.737 { 00:10:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.737 "dma_device_type": 2 00:10:20.737 } 00:10:20.737 ], 00:10:20.737 "driver_specific": {} 00:10:20.737 } 00:10:20.737 ] 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.737 09:23:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.737 [2024-12-12 09:23:54.732242] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:20.737 [2024-12-12 09:23:54.732336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:20.737 [2024-12-12 09:23:54.732381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.737 [2024-12-12 09:23:54.734478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.737 [2024-12-12 09:23:54.734572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.737 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.997 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.997 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.997 "name": "Existed_Raid", 00:10:20.997 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:20.997 "strip_size_kb": 64, 00:10:20.997 "state": "configuring", 00:10:20.997 "raid_level": "raid0", 00:10:20.997 "superblock": true, 00:10:20.997 "num_base_bdevs": 4, 00:10:20.997 "num_base_bdevs_discovered": 3, 00:10:20.997 "num_base_bdevs_operational": 4, 00:10:20.997 "base_bdevs_list": [ 00:10:20.997 { 00:10:20.997 "name": "BaseBdev1", 00:10:20.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.997 "is_configured": false, 00:10:20.997 "data_offset": 0, 00:10:20.997 "data_size": 0 00:10:20.997 }, 00:10:20.997 { 00:10:20.997 "name": "BaseBdev2", 00:10:20.997 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:20.997 "is_configured": true, 00:10:20.997 "data_offset": 2048, 00:10:20.997 "data_size": 63488 
00:10:20.997 }, 00:10:20.997 { 00:10:20.997 "name": "BaseBdev3", 00:10:20.997 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:20.997 "is_configured": true, 00:10:20.997 "data_offset": 2048, 00:10:20.997 "data_size": 63488 00:10:20.997 }, 00:10:20.997 { 00:10:20.997 "name": "BaseBdev4", 00:10:20.997 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:20.997 "is_configured": true, 00:10:20.997 "data_offset": 2048, 00:10:20.997 "data_size": 63488 00:10:20.997 } 00:10:20.997 ] 00:10:20.997 }' 00:10:20.997 09:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.997 09:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.257 [2024-12-12 09:23:55.163649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.257 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.258 "name": "Existed_Raid", 00:10:21.258 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:21.258 "strip_size_kb": 64, 00:10:21.258 "state": "configuring", 00:10:21.258 "raid_level": "raid0", 00:10:21.258 "superblock": true, 00:10:21.258 "num_base_bdevs": 4, 00:10:21.258 "num_base_bdevs_discovered": 2, 00:10:21.258 "num_base_bdevs_operational": 4, 00:10:21.258 "base_bdevs_list": [ 00:10:21.258 { 00:10:21.258 "name": "BaseBdev1", 00:10:21.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.258 "is_configured": false, 00:10:21.258 "data_offset": 0, 00:10:21.258 "data_size": 0 00:10:21.258 }, 00:10:21.258 { 00:10:21.258 "name": null, 00:10:21.258 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:21.258 "is_configured": false, 00:10:21.258 "data_offset": 0, 00:10:21.258 "data_size": 63488 
00:10:21.258 }, 00:10:21.258 { 00:10:21.258 "name": "BaseBdev3", 00:10:21.258 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:21.258 "is_configured": true, 00:10:21.258 "data_offset": 2048, 00:10:21.258 "data_size": 63488 00:10:21.258 }, 00:10:21.258 { 00:10:21.258 "name": "BaseBdev4", 00:10:21.258 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:21.258 "is_configured": true, 00:10:21.258 "data_offset": 2048, 00:10:21.258 "data_size": 63488 00:10:21.258 } 00:10:21.258 ] 00:10:21.258 }' 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.258 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.826 [2024-12-12 09:23:55.704011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.826 BaseBdev1 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.826 [ 00:10:21.826 { 00:10:21.826 "name": "BaseBdev1", 00:10:21.826 "aliases": [ 00:10:21.826 "8dc71206-6531-4d28-9160-332a4e782e76" 00:10:21.826 ], 00:10:21.826 "product_name": "Malloc disk", 00:10:21.826 "block_size": 512, 00:10:21.826 "num_blocks": 65536, 00:10:21.826 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:21.826 "assigned_rate_limits": { 00:10:21.826 "rw_ios_per_sec": 0, 00:10:21.826 "rw_mbytes_per_sec": 0, 
00:10:21.826 "r_mbytes_per_sec": 0, 00:10:21.826 "w_mbytes_per_sec": 0 00:10:21.826 }, 00:10:21.826 "claimed": true, 00:10:21.826 "claim_type": "exclusive_write", 00:10:21.826 "zoned": false, 00:10:21.826 "supported_io_types": { 00:10:21.826 "read": true, 00:10:21.826 "write": true, 00:10:21.826 "unmap": true, 00:10:21.826 "flush": true, 00:10:21.826 "reset": true, 00:10:21.826 "nvme_admin": false, 00:10:21.826 "nvme_io": false, 00:10:21.826 "nvme_io_md": false, 00:10:21.826 "write_zeroes": true, 00:10:21.826 "zcopy": true, 00:10:21.826 "get_zone_info": false, 00:10:21.826 "zone_management": false, 00:10:21.826 "zone_append": false, 00:10:21.826 "compare": false, 00:10:21.826 "compare_and_write": false, 00:10:21.826 "abort": true, 00:10:21.826 "seek_hole": false, 00:10:21.826 "seek_data": false, 00:10:21.826 "copy": true, 00:10:21.826 "nvme_iov_md": false 00:10:21.826 }, 00:10:21.826 "memory_domains": [ 00:10:21.826 { 00:10:21.826 "dma_device_id": "system", 00:10:21.826 "dma_device_type": 1 00:10:21.826 }, 00:10:21.826 { 00:10:21.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.826 "dma_device_type": 2 00:10:21.826 } 00:10:21.826 ], 00:10:21.826 "driver_specific": {} 00:10:21.826 } 00:10:21.826 ] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.826 09:23:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.826 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.826 "name": "Existed_Raid", 00:10:21.826 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:21.826 "strip_size_kb": 64, 00:10:21.826 "state": "configuring", 00:10:21.826 "raid_level": "raid0", 00:10:21.826 "superblock": true, 00:10:21.826 "num_base_bdevs": 4, 00:10:21.826 "num_base_bdevs_discovered": 3, 00:10:21.826 "num_base_bdevs_operational": 4, 00:10:21.826 "base_bdevs_list": [ 00:10:21.826 { 00:10:21.827 "name": "BaseBdev1", 00:10:21.827 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:21.827 "is_configured": true, 00:10:21.827 "data_offset": 2048, 00:10:21.827 "data_size": 63488 00:10:21.827 }, 00:10:21.827 { 
00:10:21.827 "name": null, 00:10:21.827 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:21.827 "is_configured": false, 00:10:21.827 "data_offset": 0, 00:10:21.827 "data_size": 63488 00:10:21.827 }, 00:10:21.827 { 00:10:21.827 "name": "BaseBdev3", 00:10:21.827 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:21.827 "is_configured": true, 00:10:21.827 "data_offset": 2048, 00:10:21.827 "data_size": 63488 00:10:21.827 }, 00:10:21.827 { 00:10:21.827 "name": "BaseBdev4", 00:10:21.827 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:21.827 "is_configured": true, 00:10:21.827 "data_offset": 2048, 00:10:21.827 "data_size": 63488 00:10:21.827 } 00:10:21.827 ] 00:10:21.827 }' 00:10:21.827 09:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.827 09:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 [2024-12-12 09:23:56.219155] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.397 09:23:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.397 "name": "Existed_Raid", 00:10:22.397 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:22.397 "strip_size_kb": 64, 00:10:22.397 "state": "configuring", 00:10:22.397 "raid_level": "raid0", 00:10:22.397 "superblock": true, 00:10:22.397 "num_base_bdevs": 4, 00:10:22.397 "num_base_bdevs_discovered": 2, 00:10:22.397 "num_base_bdevs_operational": 4, 00:10:22.397 "base_bdevs_list": [ 00:10:22.397 { 00:10:22.397 "name": "BaseBdev1", 00:10:22.397 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:22.397 "is_configured": true, 00:10:22.397 "data_offset": 2048, 00:10:22.397 "data_size": 63488 00:10:22.397 }, 00:10:22.397 { 00:10:22.397 "name": null, 00:10:22.397 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:22.397 "is_configured": false, 00:10:22.397 "data_offset": 0, 00:10:22.397 "data_size": 63488 00:10:22.397 }, 00:10:22.397 { 00:10:22.397 "name": null, 00:10:22.397 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:22.397 "is_configured": false, 00:10:22.397 "data_offset": 0, 00:10:22.397 "data_size": 63488 00:10:22.397 }, 00:10:22.397 { 00:10:22.397 "name": "BaseBdev4", 00:10:22.397 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:22.397 "is_configured": true, 00:10:22.397 "data_offset": 2048, 00:10:22.397 "data_size": 63488 00:10:22.397 } 00:10:22.397 ] 00:10:22.397 }' 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.397 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.657 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.657 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:22.657 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.657 
09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.917 [2024-12-12 09:23:56.726297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.917 "name": "Existed_Raid", 00:10:22.917 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:22.917 "strip_size_kb": 64, 00:10:22.917 "state": "configuring", 00:10:22.917 "raid_level": "raid0", 00:10:22.917 "superblock": true, 00:10:22.917 "num_base_bdevs": 4, 00:10:22.917 "num_base_bdevs_discovered": 3, 00:10:22.917 "num_base_bdevs_operational": 4, 00:10:22.917 "base_bdevs_list": [ 00:10:22.917 { 00:10:22.917 "name": "BaseBdev1", 00:10:22.917 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:22.917 "is_configured": true, 00:10:22.917 "data_offset": 2048, 00:10:22.917 "data_size": 63488 00:10:22.917 }, 00:10:22.917 { 00:10:22.917 "name": null, 00:10:22.917 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:22.917 "is_configured": false, 00:10:22.917 "data_offset": 0, 00:10:22.917 "data_size": 63488 00:10:22.917 }, 00:10:22.917 { 00:10:22.917 "name": "BaseBdev3", 00:10:22.917 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:22.917 "is_configured": true, 00:10:22.917 "data_offset": 2048, 00:10:22.917 "data_size": 63488 00:10:22.917 }, 00:10:22.917 { 00:10:22.917 "name": "BaseBdev4", 00:10:22.917 "uuid": 
"d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:22.917 "is_configured": true, 00:10:22.917 "data_offset": 2048, 00:10:22.917 "data_size": 63488 00:10:22.917 } 00:10:22.917 ] 00:10:22.917 }' 00:10:22.917 09:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.918 09:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.177 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.177 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.177 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.177 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.177 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.437 [2024-12-12 09:23:57.233475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.437 "name": "Existed_Raid", 00:10:23.437 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:23.437 "strip_size_kb": 64, 00:10:23.437 "state": "configuring", 00:10:23.437 "raid_level": "raid0", 00:10:23.437 "superblock": true, 00:10:23.437 "num_base_bdevs": 4, 00:10:23.437 "num_base_bdevs_discovered": 2, 00:10:23.437 "num_base_bdevs_operational": 4, 00:10:23.437 "base_bdevs_list": [ 00:10:23.437 { 00:10:23.437 "name": null, 00:10:23.437 
"uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:23.437 "is_configured": false, 00:10:23.437 "data_offset": 0, 00:10:23.437 "data_size": 63488 00:10:23.437 }, 00:10:23.437 { 00:10:23.437 "name": null, 00:10:23.437 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:23.437 "is_configured": false, 00:10:23.437 "data_offset": 0, 00:10:23.437 "data_size": 63488 00:10:23.437 }, 00:10:23.437 { 00:10:23.437 "name": "BaseBdev3", 00:10:23.437 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:23.437 "is_configured": true, 00:10:23.437 "data_offset": 2048, 00:10:23.437 "data_size": 63488 00:10:23.437 }, 00:10:23.437 { 00:10:23.437 "name": "BaseBdev4", 00:10:23.437 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:23.437 "is_configured": true, 00:10:23.437 "data_offset": 2048, 00:10:23.437 "data_size": 63488 00:10:23.437 } 00:10:23.437 ] 00:10:23.437 }' 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.437 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 [2024-12-12 09:23:57.769151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.006 "name": "Existed_Raid", 00:10:24.006 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:24.006 "strip_size_kb": 64, 00:10:24.006 "state": "configuring", 00:10:24.006 "raid_level": "raid0", 00:10:24.006 "superblock": true, 00:10:24.006 "num_base_bdevs": 4, 00:10:24.006 "num_base_bdevs_discovered": 3, 00:10:24.006 "num_base_bdevs_operational": 4, 00:10:24.006 "base_bdevs_list": [ 00:10:24.006 { 00:10:24.006 "name": null, 00:10:24.006 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:24.006 "is_configured": false, 00:10:24.006 "data_offset": 0, 00:10:24.006 "data_size": 63488 00:10:24.006 }, 00:10:24.006 { 00:10:24.006 "name": "BaseBdev2", 00:10:24.006 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:24.006 "is_configured": true, 00:10:24.006 "data_offset": 2048, 00:10:24.006 "data_size": 63488 00:10:24.006 }, 00:10:24.006 { 00:10:24.006 "name": "BaseBdev3", 00:10:24.006 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:24.006 "is_configured": true, 00:10:24.006 "data_offset": 2048, 00:10:24.006 "data_size": 63488 00:10:24.006 }, 00:10:24.006 { 00:10:24.006 "name": "BaseBdev4", 00:10:24.006 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:24.006 "is_configured": true, 00:10:24.006 "data_offset": 2048, 00:10:24.006 "data_size": 63488 00:10:24.006 } 00:10:24.006 ] 00:10:24.006 }' 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.006 09:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.266 09:23:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8dc71206-6531-4d28-9160-332a4e782e76 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.266 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 [2024-12-12 09:23:58.323787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:24.526 [2024-12-12 09:23:58.324115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.526 [2024-12-12 09:23:58.324135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.526 [2024-12-12 09:23:58.324426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:24.526 NewBaseBdev 00:10:24.526 [2024-12-12 09:23:58.324582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.526 [2024-12-12 09:23:58.324594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:24.526 [2024-12-12 09:23:58.324720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.526 09:23:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 [ 00:10:24.526 { 00:10:24.526 "name": "NewBaseBdev", 00:10:24.526 "aliases": [ 00:10:24.526 "8dc71206-6531-4d28-9160-332a4e782e76" 00:10:24.526 ], 00:10:24.526 "product_name": "Malloc disk", 00:10:24.526 "block_size": 512, 00:10:24.526 "num_blocks": 65536, 00:10:24.526 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:24.526 "assigned_rate_limits": { 00:10:24.526 "rw_ios_per_sec": 0, 00:10:24.526 "rw_mbytes_per_sec": 0, 00:10:24.526 "r_mbytes_per_sec": 0, 00:10:24.526 "w_mbytes_per_sec": 0 00:10:24.526 }, 00:10:24.526 "claimed": true, 00:10:24.526 "claim_type": "exclusive_write", 00:10:24.526 "zoned": false, 00:10:24.526 "supported_io_types": { 00:10:24.526 "read": true, 00:10:24.526 "write": true, 00:10:24.526 "unmap": true, 00:10:24.526 "flush": true, 00:10:24.526 "reset": true, 00:10:24.526 "nvme_admin": false, 00:10:24.526 "nvme_io": false, 00:10:24.526 "nvme_io_md": false, 00:10:24.526 "write_zeroes": true, 00:10:24.526 "zcopy": true, 00:10:24.526 "get_zone_info": false, 00:10:24.526 "zone_management": false, 00:10:24.526 "zone_append": false, 00:10:24.526 "compare": false, 00:10:24.526 "compare_and_write": false, 00:10:24.526 "abort": true, 00:10:24.526 "seek_hole": false, 00:10:24.526 "seek_data": false, 00:10:24.526 "copy": true, 00:10:24.526 "nvme_iov_md": false 00:10:24.526 }, 00:10:24.526 "memory_domains": [ 00:10:24.526 { 00:10:24.526 "dma_device_id": "system", 00:10:24.526 "dma_device_type": 1 00:10:24.526 }, 00:10:24.526 { 00:10:24.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.526 "dma_device_type": 2 00:10:24.526 } 00:10:24.526 ], 00:10:24.526 "driver_specific": {} 00:10:24.526 } 00:10:24.526 ] 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.526 09:23:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.526 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.526 "name": "Existed_Raid", 00:10:24.526 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:24.526 "strip_size_kb": 64, 00:10:24.526 
"state": "online", 00:10:24.526 "raid_level": "raid0", 00:10:24.526 "superblock": true, 00:10:24.526 "num_base_bdevs": 4, 00:10:24.526 "num_base_bdevs_discovered": 4, 00:10:24.526 "num_base_bdevs_operational": 4, 00:10:24.526 "base_bdevs_list": [ 00:10:24.526 { 00:10:24.526 "name": "NewBaseBdev", 00:10:24.526 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:24.526 "is_configured": true, 00:10:24.526 "data_offset": 2048, 00:10:24.526 "data_size": 63488 00:10:24.526 }, 00:10:24.526 { 00:10:24.526 "name": "BaseBdev2", 00:10:24.526 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:24.526 "is_configured": true, 00:10:24.526 "data_offset": 2048, 00:10:24.526 "data_size": 63488 00:10:24.526 }, 00:10:24.526 { 00:10:24.526 "name": "BaseBdev3", 00:10:24.526 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:24.526 "is_configured": true, 00:10:24.526 "data_offset": 2048, 00:10:24.526 "data_size": 63488 00:10:24.526 }, 00:10:24.526 { 00:10:24.526 "name": "BaseBdev4", 00:10:24.526 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:24.526 "is_configured": true, 00:10:24.526 "data_offset": 2048, 00:10:24.526 "data_size": 63488 00:10:24.526 } 00:10:24.526 ] 00:10:24.526 }' 00:10:24.527 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.527 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.785 
09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.785 [2024-12-12 09:23:58.807393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.044 "name": "Existed_Raid", 00:10:25.044 "aliases": [ 00:10:25.044 "78be5d56-7a8e-48c9-af53-6cad5b2702e1" 00:10:25.044 ], 00:10:25.044 "product_name": "Raid Volume", 00:10:25.044 "block_size": 512, 00:10:25.044 "num_blocks": 253952, 00:10:25.044 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:25.044 "assigned_rate_limits": { 00:10:25.044 "rw_ios_per_sec": 0, 00:10:25.044 "rw_mbytes_per_sec": 0, 00:10:25.044 "r_mbytes_per_sec": 0, 00:10:25.044 "w_mbytes_per_sec": 0 00:10:25.044 }, 00:10:25.044 "claimed": false, 00:10:25.044 "zoned": false, 00:10:25.044 "supported_io_types": { 00:10:25.044 "read": true, 00:10:25.044 "write": true, 00:10:25.044 "unmap": true, 00:10:25.044 "flush": true, 00:10:25.044 "reset": true, 00:10:25.044 "nvme_admin": false, 00:10:25.044 "nvme_io": false, 00:10:25.044 "nvme_io_md": false, 00:10:25.044 "write_zeroes": true, 00:10:25.044 "zcopy": false, 00:10:25.044 "get_zone_info": false, 00:10:25.044 "zone_management": false, 00:10:25.044 "zone_append": false, 00:10:25.044 "compare": false, 00:10:25.044 "compare_and_write": false, 00:10:25.044 "abort": 
false, 00:10:25.044 "seek_hole": false, 00:10:25.044 "seek_data": false, 00:10:25.044 "copy": false, 00:10:25.044 "nvme_iov_md": false 00:10:25.044 }, 00:10:25.044 "memory_domains": [ 00:10:25.044 { 00:10:25.044 "dma_device_id": "system", 00:10:25.044 "dma_device_type": 1 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.044 "dma_device_type": 2 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "system", 00:10:25.044 "dma_device_type": 1 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.044 "dma_device_type": 2 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "system", 00:10:25.044 "dma_device_type": 1 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.044 "dma_device_type": 2 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "system", 00:10:25.044 "dma_device_type": 1 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.044 "dma_device_type": 2 00:10:25.044 } 00:10:25.044 ], 00:10:25.044 "driver_specific": { 00:10:25.044 "raid": { 00:10:25.044 "uuid": "78be5d56-7a8e-48c9-af53-6cad5b2702e1", 00:10:25.044 "strip_size_kb": 64, 00:10:25.044 "state": "online", 00:10:25.044 "raid_level": "raid0", 00:10:25.044 "superblock": true, 00:10:25.044 "num_base_bdevs": 4, 00:10:25.044 "num_base_bdevs_discovered": 4, 00:10:25.044 "num_base_bdevs_operational": 4, 00:10:25.044 "base_bdevs_list": [ 00:10:25.044 { 00:10:25.044 "name": "NewBaseBdev", 00:10:25.044 "uuid": "8dc71206-6531-4d28-9160-332a4e782e76", 00:10:25.044 "is_configured": true, 00:10:25.044 "data_offset": 2048, 00:10:25.044 "data_size": 63488 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "name": "BaseBdev2", 00:10:25.044 "uuid": "393c0660-1498-4554-8fa6-055671c1ad53", 00:10:25.044 "is_configured": true, 00:10:25.044 "data_offset": 2048, 00:10:25.044 "data_size": 63488 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 
"name": "BaseBdev3", 00:10:25.044 "uuid": "1867eecb-41cc-4f07-9420-2f0f71353ea3", 00:10:25.044 "is_configured": true, 00:10:25.044 "data_offset": 2048, 00:10:25.044 "data_size": 63488 00:10:25.044 }, 00:10:25.044 { 00:10:25.044 "name": "BaseBdev4", 00:10:25.044 "uuid": "d1fe433c-093a-4ac6-801b-d36a847ee9eb", 00:10:25.044 "is_configured": true, 00:10:25.044 "data_offset": 2048, 00:10:25.044 "data_size": 63488 00:10:25.044 } 00:10:25.044 ] 00:10:25.044 } 00:10:25.044 } 00:10:25.044 }' 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:25.044 BaseBdev2 00:10:25.044 BaseBdev3 00:10:25.044 BaseBdev4' 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.044 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.045 09:23:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.045 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.045 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.045 09:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.045 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.045 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.045 09:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.045 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.312 [2024-12-12 09:23:59.106471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.312 [2024-12-12 09:23:59.106505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.312 [2024-12-12 09:23:59.106591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.312 [2024-12-12 09:23:59.106670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.312 [2024-12-12 09:23:59.106680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71192 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71192 ']' 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71192 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71192 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71192' 00:10:25.312 killing process with pid 71192 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71192 00:10:25.312 [2024-12-12 09:23:59.155069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.312 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71192 00:10:25.593 [2024-12-12 09:23:59.567727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.974 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:26.974 00:10:26.974 real 0m11.450s 00:10:26.974 user 0m17.921s 00:10:26.974 sys 0m2.190s 00:10:26.974 ************************************ 00:10:26.974 END TEST raid_state_function_test_sb 00:10:26.974 
************************************ 00:10:26.974 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.974 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.974 09:24:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:26.974 09:24:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.974 09:24:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.974 09:24:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.974 ************************************ 00:10:26.974 START TEST raid_superblock_test 00:10:26.974 ************************************ 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71865 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71865 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71865 ']' 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.974 09:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.974 [2024-12-12 09:24:00.917211] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:26.974 [2024-12-12 09:24:00.917326] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71865 ] 00:10:27.234 [2024-12-12 09:24:01.090126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.234 [2024-12-12 09:24:01.220580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.494 [2024-12-12 09:24:01.434492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.494 [2024-12-12 09:24:01.434526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:27.753 
09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.753 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.013 malloc1 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.013 [2024-12-12 09:24:01.783876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.013 [2024-12-12 09:24:01.784022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.013 [2024-12-12 09:24:01.784070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:28.013 [2024-12-12 09:24:01.784103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.013 [2024-12-12 09:24:01.786519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.013 [2024-12-12 09:24:01.786587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.013 pt1 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.013 malloc2 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.013 [2024-12-12 09:24:01.844600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.013 [2024-12-12 09:24:01.844696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.013 [2024-12-12 09:24:01.844753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:28.013 [2024-12-12 09:24:01.844787] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.013 [2024-12-12 09:24:01.847143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.013 [2024-12-12 09:24:01.847206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.013 
pt2 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.013 malloc3 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.013 [2024-12-12 09:24:01.917408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.013 [2024-12-12 09:24:01.917514] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.013 [2024-12-12 09:24:01.917555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:28.013 [2024-12-12 09:24:01.917585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.013 [2024-12-12 09:24:01.920022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.013 [2024-12-12 09:24:01.920108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:28.013 pt3 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.013 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.014 malloc4 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.014 [2024-12-12 09:24:01.982772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:28.014 [2024-12-12 09:24:01.982881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.014 [2024-12-12 09:24:01.982921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:28.014 [2024-12-12 09:24:01.982986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.014 [2024-12-12 09:24:01.985367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.014 [2024-12-12 09:24:01.985434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:28.014 pt4 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.014 [2024-12-12 09:24:01.994788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.014 [2024-12-12 
09:24:01.996736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.014 [2024-12-12 09:24:01.996813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.014 [2024-12-12 09:24:01.996859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:28.014 [2024-12-12 09:24:01.997090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.014 [2024-12-12 09:24:01.997102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.014 [2024-12-12 09:24:01.997377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.014 [2024-12-12 09:24:01.997581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:28.014 [2024-12-12 09:24:01.997594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:28.014 [2024-12-12 09:24:01.997739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.014 09:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.014 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.273 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.273 "name": "raid_bdev1", 00:10:28.273 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:28.273 "strip_size_kb": 64, 00:10:28.273 "state": "online", 00:10:28.273 "raid_level": "raid0", 00:10:28.273 "superblock": true, 00:10:28.273 "num_base_bdevs": 4, 00:10:28.273 "num_base_bdevs_discovered": 4, 00:10:28.273 "num_base_bdevs_operational": 4, 00:10:28.273 "base_bdevs_list": [ 00:10:28.273 { 00:10:28.273 "name": "pt1", 00:10:28.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.273 "is_configured": true, 00:10:28.273 "data_offset": 2048, 00:10:28.273 "data_size": 63488 00:10:28.273 }, 00:10:28.273 { 00:10:28.273 "name": "pt2", 00:10:28.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.273 "is_configured": true, 00:10:28.273 "data_offset": 2048, 00:10:28.273 "data_size": 63488 00:10:28.273 }, 00:10:28.273 { 00:10:28.273 "name": "pt3", 00:10:28.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.273 "is_configured": true, 00:10:28.273 "data_offset": 2048, 00:10:28.273 
"data_size": 63488 00:10:28.273 }, 00:10:28.273 { 00:10:28.273 "name": "pt4", 00:10:28.273 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.273 "is_configured": true, 00:10:28.273 "data_offset": 2048, 00:10:28.273 "data_size": 63488 00:10:28.273 } 00:10:28.273 ] 00:10:28.273 }' 00:10:28.273 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.273 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.532 [2024-12-12 09:24:02.450272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.532 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.532 "name": "raid_bdev1", 00:10:28.532 "aliases": [ 00:10:28.532 "8470ca23-bed4-4071-9b2b-e2957f087825" 
00:10:28.532 ], 00:10:28.532 "product_name": "Raid Volume", 00:10:28.532 "block_size": 512, 00:10:28.532 "num_blocks": 253952, 00:10:28.532 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:28.532 "assigned_rate_limits": { 00:10:28.532 "rw_ios_per_sec": 0, 00:10:28.532 "rw_mbytes_per_sec": 0, 00:10:28.532 "r_mbytes_per_sec": 0, 00:10:28.532 "w_mbytes_per_sec": 0 00:10:28.532 }, 00:10:28.532 "claimed": false, 00:10:28.532 "zoned": false, 00:10:28.532 "supported_io_types": { 00:10:28.532 "read": true, 00:10:28.532 "write": true, 00:10:28.532 "unmap": true, 00:10:28.532 "flush": true, 00:10:28.532 "reset": true, 00:10:28.532 "nvme_admin": false, 00:10:28.532 "nvme_io": false, 00:10:28.532 "nvme_io_md": false, 00:10:28.532 "write_zeroes": true, 00:10:28.532 "zcopy": false, 00:10:28.533 "get_zone_info": false, 00:10:28.533 "zone_management": false, 00:10:28.533 "zone_append": false, 00:10:28.533 "compare": false, 00:10:28.533 "compare_and_write": false, 00:10:28.533 "abort": false, 00:10:28.533 "seek_hole": false, 00:10:28.533 "seek_data": false, 00:10:28.533 "copy": false, 00:10:28.533 "nvme_iov_md": false 00:10:28.533 }, 00:10:28.533 "memory_domains": [ 00:10:28.533 { 00:10:28.533 "dma_device_id": "system", 00:10:28.533 "dma_device_type": 1 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.533 "dma_device_type": 2 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": "system", 00:10:28.533 "dma_device_type": 1 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.533 "dma_device_type": 2 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": "system", 00:10:28.533 "dma_device_type": 1 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.533 "dma_device_type": 2 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": "system", 00:10:28.533 "dma_device_type": 1 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:28.533 "dma_device_type": 2 00:10:28.533 } 00:10:28.533 ], 00:10:28.533 "driver_specific": { 00:10:28.533 "raid": { 00:10:28.533 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:28.533 "strip_size_kb": 64, 00:10:28.533 "state": "online", 00:10:28.533 "raid_level": "raid0", 00:10:28.533 "superblock": true, 00:10:28.533 "num_base_bdevs": 4, 00:10:28.533 "num_base_bdevs_discovered": 4, 00:10:28.533 "num_base_bdevs_operational": 4, 00:10:28.533 "base_bdevs_list": [ 00:10:28.533 { 00:10:28.533 "name": "pt1", 00:10:28.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.533 "is_configured": true, 00:10:28.533 "data_offset": 2048, 00:10:28.533 "data_size": 63488 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "name": "pt2", 00:10:28.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.533 "is_configured": true, 00:10:28.533 "data_offset": 2048, 00:10:28.533 "data_size": 63488 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "name": "pt3", 00:10:28.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.533 "is_configured": true, 00:10:28.533 "data_offset": 2048, 00:10:28.533 "data_size": 63488 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "name": "pt4", 00:10:28.533 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.533 "is_configured": true, 00:10:28.533 "data_offset": 2048, 00:10:28.533 "data_size": 63488 00:10:28.533 } 00:10:28.533 ] 00:10:28.533 } 00:10:28.533 } 00:10:28.533 }' 00:10:28.533 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.533 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:28.533 pt2 00:10:28.533 pt3 00:10:28.533 pt4' 00:10:28.533 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.793 09:24:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.793 [2024-12-12 09:24:02.789617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.793 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8470ca23-bed4-4071-9b2b-e2957f087825 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8470ca23-bed4-4071-9b2b-e2957f087825 ']' 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 [2024-12-12 09:24:02.837262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.054 [2024-12-12 09:24:02.837286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.054 [2024-12-12 09:24:02.837368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.054 [2024-12-12 09:24:02.837436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.054 [2024-12-12 09:24:02.837450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.054 09:24:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.054 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.054 [2024-12-12 09:24:02.981061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:29.054 [2024-12-12 09:24:02.983169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:29.054 [2024-12-12 09:24:02.983216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:29.054 [2024-12-12 09:24:02.983248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:29.054 [2024-12-12 09:24:02.983298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:29.054 [2024-12-12 09:24:02.983344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:29.054 [2024-12-12 09:24:02.983362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:29.054 [2024-12-12 09:24:02.983379] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:29.054 [2024-12-12 09:24:02.983392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.054 [2024-12-12 09:24:02.983405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:29.054 request: 00:10:29.054 { 00:10:29.054 "name": "raid_bdev1", 00:10:29.054 "raid_level": "raid0", 00:10:29.054 "base_bdevs": [ 00:10:29.054 "malloc1", 00:10:29.054 "malloc2", 00:10:29.054 "malloc3", 00:10:29.054 "malloc4" 00:10:29.054 ], 00:10:29.054 "strip_size_kb": 64, 00:10:29.054 "superblock": false, 00:10:29.055 "method": "bdev_raid_create", 00:10:29.055 "req_id": 1 00:10:29.055 } 00:10:29.055 Got JSON-RPC error response 00:10:29.055 response: 00:10:29.055 { 00:10:29.055 "code": -17, 00:10:29.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:29.055 } 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.055 09:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.055 [2024-12-12 09:24:03.048912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:29.055 [2024-12-12 09:24:03.049053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.055 [2024-12-12 09:24:03.049090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:29.055 [2024-12-12 09:24:03.049126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.055 [2024-12-12 09:24:03.051596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.055 [2024-12-12 09:24:03.051683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.055 [2024-12-12 09:24:03.051784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:29.055 [2024-12-12 09:24:03.051868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.055 pt1 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.055 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.314 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.314 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.314 "name": "raid_bdev1", 00:10:29.314 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:29.314 "strip_size_kb": 64, 00:10:29.314 "state": "configuring", 00:10:29.314 "raid_level": "raid0", 00:10:29.314 "superblock": true, 00:10:29.314 "num_base_bdevs": 4, 00:10:29.314 "num_base_bdevs_discovered": 1, 00:10:29.314 "num_base_bdevs_operational": 4, 00:10:29.314 "base_bdevs_list": [ 00:10:29.314 { 00:10:29.314 "name": "pt1", 00:10:29.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.314 "is_configured": true, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 }, 00:10:29.314 { 00:10:29.314 "name": null, 00:10:29.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.314 "is_configured": false, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 }, 00:10:29.314 { 00:10:29.314 "name": null, 00:10:29.314 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.314 "is_configured": false, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 }, 00:10:29.314 { 00:10:29.314 "name": null, 00:10:29.314 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:29.314 "is_configured": false, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 } 00:10:29.314 ] 00:10:29.314 }' 00:10:29.314 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.314 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.573 [2024-12-12 09:24:03.468171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.573 [2024-12-12 09:24:03.468230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.573 [2024-12-12 09:24:03.468248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:29.573 [2024-12-12 09:24:03.468259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.573 [2024-12-12 09:24:03.468682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.573 [2024-12-12 09:24:03.468710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.573 [2024-12-12 09:24:03.468779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:29.573 [2024-12-12 09:24:03.468800] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.573 pt2 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:29.573 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 [2024-12-12 09:24:03.476185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.574 09:24:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.574 "name": "raid_bdev1", 00:10:29.574 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:29.574 "strip_size_kb": 64, 00:10:29.574 "state": "configuring", 00:10:29.574 "raid_level": "raid0", 00:10:29.574 "superblock": true, 00:10:29.574 "num_base_bdevs": 4, 00:10:29.574 "num_base_bdevs_discovered": 1, 00:10:29.574 "num_base_bdevs_operational": 4, 00:10:29.574 "base_bdevs_list": [ 00:10:29.574 { 00:10:29.574 "name": "pt1", 00:10:29.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.574 "is_configured": true, 00:10:29.574 "data_offset": 2048, 00:10:29.574 "data_size": 63488 00:10:29.574 }, 00:10:29.574 { 00:10:29.574 "name": null, 00:10:29.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.574 "is_configured": false, 00:10:29.574 "data_offset": 0, 00:10:29.574 "data_size": 63488 00:10:29.574 }, 00:10:29.574 { 00:10:29.574 "name": null, 00:10:29.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.574 "is_configured": false, 00:10:29.574 "data_offset": 2048, 00:10:29.574 "data_size": 63488 00:10:29.574 }, 00:10:29.574 { 00:10:29.574 "name": null, 00:10:29.574 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:29.574 "is_configured": false, 00:10:29.574 "data_offset": 2048, 00:10:29.574 "data_size": 63488 00:10:29.574 } 00:10:29.574 ] 00:10:29.574 }' 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.574 09:24:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.143 [2024-12-12 09:24:03.923522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:30.143 [2024-12-12 09:24:03.923668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.143 [2024-12-12 09:24:03.923710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:30.143 [2024-12-12 09:24:03.923738] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.143 [2024-12-12 09:24:03.924333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.143 [2024-12-12 09:24:03.924392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:30.143 [2024-12-12 09:24:03.924523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:30.143 [2024-12-12 09:24:03.924580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:30.143 pt2 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.143 [2024-12-12 09:24:03.935427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:30.143 [2024-12-12 09:24:03.935524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.143 [2024-12-12 09:24:03.935560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:30.143 [2024-12-12 09:24:03.935594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.143 [2024-12-12 09:24:03.936024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.143 [2024-12-12 09:24:03.936076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:30.143 [2024-12-12 09:24:03.936172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:30.143 [2024-12-12 09:24:03.936228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:30.143 pt3 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.143 [2024-12-12 09:24:03.947387] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:30.143 [2024-12-12 09:24:03.947477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.143 [2024-12-12 09:24:03.947508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:30.143 [2024-12-12 09:24:03.947533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.143 [2024-12-12 09:24:03.947946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.143 [2024-12-12 09:24:03.948008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:30.143 [2024-12-12 09:24:03.948097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:30.143 [2024-12-12 09:24:03.948149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:30.143 [2024-12-12 09:24:03.948306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:30.143 [2024-12-12 09:24:03.948343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.143 [2024-12-12 09:24:03.948620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:30.143 [2024-12-12 09:24:03.948808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:30.143 [2024-12-12 09:24:03.948826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:30.143 [2024-12-12 09:24:03.948984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.143 pt4 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.143 09:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.143 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.143 "name": "raid_bdev1", 00:10:30.143 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:30.143 "strip_size_kb": 64, 00:10:30.143 "state": "online", 00:10:30.143 "raid_level": "raid0", 00:10:30.143 
"superblock": true, 00:10:30.143 "num_base_bdevs": 4, 00:10:30.143 "num_base_bdevs_discovered": 4, 00:10:30.143 "num_base_bdevs_operational": 4, 00:10:30.143 "base_bdevs_list": [ 00:10:30.143 { 00:10:30.143 "name": "pt1", 00:10:30.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.143 "is_configured": true, 00:10:30.143 "data_offset": 2048, 00:10:30.143 "data_size": 63488 00:10:30.143 }, 00:10:30.143 { 00:10:30.143 "name": "pt2", 00:10:30.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.143 "is_configured": true, 00:10:30.143 "data_offset": 2048, 00:10:30.143 "data_size": 63488 00:10:30.143 }, 00:10:30.143 { 00:10:30.143 "name": "pt3", 00:10:30.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.143 "is_configured": true, 00:10:30.143 "data_offset": 2048, 00:10:30.143 "data_size": 63488 00:10:30.143 }, 00:10:30.143 { 00:10:30.143 "name": "pt4", 00:10:30.143 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:30.144 "is_configured": true, 00:10:30.144 "data_offset": 2048, 00:10:30.144 "data_size": 63488 00:10:30.144 } 00:10:30.144 ] 00:10:30.144 }' 00:10:30.144 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.144 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.403 09:24:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.403 [2024-12-12 09:24:04.347059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.403 "name": "raid_bdev1", 00:10:30.403 "aliases": [ 00:10:30.403 "8470ca23-bed4-4071-9b2b-e2957f087825" 00:10:30.403 ], 00:10:30.403 "product_name": "Raid Volume", 00:10:30.403 "block_size": 512, 00:10:30.403 "num_blocks": 253952, 00:10:30.403 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:30.403 "assigned_rate_limits": { 00:10:30.403 "rw_ios_per_sec": 0, 00:10:30.403 "rw_mbytes_per_sec": 0, 00:10:30.403 "r_mbytes_per_sec": 0, 00:10:30.403 "w_mbytes_per_sec": 0 00:10:30.403 }, 00:10:30.403 "claimed": false, 00:10:30.403 "zoned": false, 00:10:30.403 "supported_io_types": { 00:10:30.403 "read": true, 00:10:30.403 "write": true, 00:10:30.403 "unmap": true, 00:10:30.403 "flush": true, 00:10:30.403 "reset": true, 00:10:30.403 "nvme_admin": false, 00:10:30.403 "nvme_io": false, 00:10:30.403 "nvme_io_md": false, 00:10:30.403 "write_zeroes": true, 00:10:30.403 "zcopy": false, 00:10:30.403 "get_zone_info": false, 00:10:30.403 "zone_management": false, 00:10:30.403 "zone_append": false, 00:10:30.403 "compare": false, 00:10:30.403 "compare_and_write": false, 00:10:30.403 "abort": false, 00:10:30.403 "seek_hole": false, 00:10:30.403 "seek_data": false, 00:10:30.403 "copy": false, 00:10:30.403 "nvme_iov_md": false 00:10:30.403 }, 00:10:30.403 
"memory_domains": [ 00:10:30.403 { 00:10:30.403 "dma_device_id": "system", 00:10:30.403 "dma_device_type": 1 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.403 "dma_device_type": 2 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "system", 00:10:30.403 "dma_device_type": 1 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.403 "dma_device_type": 2 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "system", 00:10:30.403 "dma_device_type": 1 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.403 "dma_device_type": 2 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "system", 00:10:30.403 "dma_device_type": 1 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.403 "dma_device_type": 2 00:10:30.403 } 00:10:30.403 ], 00:10:30.403 "driver_specific": { 00:10:30.403 "raid": { 00:10:30.403 "uuid": "8470ca23-bed4-4071-9b2b-e2957f087825", 00:10:30.403 "strip_size_kb": 64, 00:10:30.403 "state": "online", 00:10:30.403 "raid_level": "raid0", 00:10:30.403 "superblock": true, 00:10:30.403 "num_base_bdevs": 4, 00:10:30.403 "num_base_bdevs_discovered": 4, 00:10:30.403 "num_base_bdevs_operational": 4, 00:10:30.403 "base_bdevs_list": [ 00:10:30.403 { 00:10:30.403 "name": "pt1", 00:10:30.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.403 "is_configured": true, 00:10:30.403 "data_offset": 2048, 00:10:30.403 "data_size": 63488 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "name": "pt2", 00:10:30.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.403 "is_configured": true, 00:10:30.403 "data_offset": 2048, 00:10:30.403 "data_size": 63488 00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "name": "pt3", 00:10:30.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.403 "is_configured": true, 00:10:30.403 "data_offset": 2048, 00:10:30.403 "data_size": 63488 
00:10:30.403 }, 00:10:30.403 { 00:10:30.403 "name": "pt4", 00:10:30.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:30.403 "is_configured": true, 00:10:30.403 "data_offset": 2048, 00:10:30.403 "data_size": 63488 00:10:30.403 } 00:10:30.403 ] 00:10:30.403 } 00:10:30.403 } 00:10:30.403 }' 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:30.403 pt2 00:10:30.403 pt3 00:10:30.403 pt4' 00:10:30.403 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:30.663 [2024-12-12 09:24:04.670407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.663 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8470ca23-bed4-4071-9b2b-e2957f087825 '!=' 8470ca23-bed4-4071-9b2b-e2957f087825 ']' 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71865 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71865 ']' 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71865 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71865 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71865' 00:10:30.923 killing process with pid 71865 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71865 00:10:30.923 [2024-12-12 09:24:04.758440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.923 [2024-12-12 09:24:04.758572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.923 09:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71865 00:10:30.923 [2024-12-12 09:24:04.758677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.923 [2024-12-12 09:24:04.758687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:31.189 [2024-12-12 09:24:05.171893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.568 09:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:32.568 00:10:32.568 real 0m5.521s 00:10:32.568 user 0m7.727s 00:10:32.568 sys 0m1.035s 00:10:32.568 09:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.568 09:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 ************************************ 00:10:32.568 END TEST raid_superblock_test 
00:10:32.568 ************************************ 00:10:32.568 09:24:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:32.568 09:24:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.568 09:24:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.568 09:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 ************************************ 00:10:32.568 START TEST raid_read_error_test 00:10:32.568 ************************************ 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1HjhwjKz5k 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72124 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72124 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72124 ']' 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.568 09:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 [2024-12-12 09:24:06.535239] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:32.568 [2024-12-12 09:24:06.535442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72124 ] 00:10:32.827 [2024-12-12 09:24:06.714159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.827 [2024-12-12 09:24:06.844193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.087 [2024-12-12 09:24:07.073859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.087 [2024-12-12 09:24:07.073969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.346 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.346 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.346 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.346 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:33.346 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.346 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 BaseBdev1_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 true 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 [2024-12-12 09:24:07.419042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:33.606 [2024-12-12 09:24:07.419152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.606 [2024-12-12 09:24:07.419177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:33.606 [2024-12-12 09:24:07.419189] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.606 [2024-12-12 09:24:07.421675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.606 [2024-12-12 09:24:07.421715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:33.606 BaseBdev1 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 BaseBdev2_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 true 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 [2024-12-12 09:24:07.491304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:33.606 [2024-12-12 09:24:07.491356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.606 [2024-12-12 09:24:07.491371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:33.606 [2024-12-12 09:24:07.491383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.606 [2024-12-12 09:24:07.493784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.606 [2024-12-12 09:24:07.493823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:33.606 BaseBdev2 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 BaseBdev3_malloc 00:10:33.606 09:24:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 true 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.606 [2024-12-12 09:24:07.578697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:33.606 [2024-12-12 09:24:07.578747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.606 [2024-12-12 09:24:07.578765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:33.606 [2024-12-12 09:24:07.578777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.606 [2024-12-12 09:24:07.581232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.606 [2024-12-12 09:24:07.581326] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:33.606 BaseBdev3 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.606 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.866 BaseBdev4_malloc 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.866 true 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.866 [2024-12-12 09:24:07.650613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:33.866 [2024-12-12 09:24:07.650665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.866 [2024-12-12 09:24:07.650683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:33.866 [2024-12-12 09:24:07.650694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.866 [2024-12-12 09:24:07.653094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.866 [2024-12-12 09:24:07.653132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:33.866 BaseBdev4 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.866 [2024-12-12 09:24:07.662660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.866 [2024-12-12 09:24:07.664739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.866 [2024-12-12 09:24:07.664813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.866 [2024-12-12 09:24:07.664883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.866 [2024-12-12 09:24:07.665123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:33.866 [2024-12-12 09:24:07.665143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.866 [2024-12-12 09:24:07.665393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:33.866 [2024-12-12 09:24:07.665558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:33.866 [2024-12-12 09:24:07.665569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:33.866 [2024-12-12 09:24:07.665713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:33.866 09:24:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.866 "name": "raid_bdev1", 00:10:33.866 "uuid": "2e653916-1cbe-458b-8868-2a47784d8b71", 00:10:33.866 "strip_size_kb": 64, 00:10:33.866 "state": "online", 00:10:33.866 "raid_level": "raid0", 00:10:33.866 "superblock": true, 00:10:33.866 "num_base_bdevs": 4, 00:10:33.866 "num_base_bdevs_discovered": 4, 00:10:33.866 "num_base_bdevs_operational": 4, 00:10:33.866 "base_bdevs_list": [ 00:10:33.866 
{ 00:10:33.866 "name": "BaseBdev1", 00:10:33.866 "uuid": "ef510605-71ac-55d0-93d2-f127039020b1", 00:10:33.866 "is_configured": true, 00:10:33.866 "data_offset": 2048, 00:10:33.866 "data_size": 63488 00:10:33.866 }, 00:10:33.866 { 00:10:33.866 "name": "BaseBdev2", 00:10:33.866 "uuid": "f8d2840a-7e4e-5ab5-b50a-8a9b8aef5058", 00:10:33.866 "is_configured": true, 00:10:33.866 "data_offset": 2048, 00:10:33.866 "data_size": 63488 00:10:33.866 }, 00:10:33.866 { 00:10:33.866 "name": "BaseBdev3", 00:10:33.866 "uuid": "c95dab9d-943e-5a68-8007-0b4fe0e22c7d", 00:10:33.866 "is_configured": true, 00:10:33.866 "data_offset": 2048, 00:10:33.866 "data_size": 63488 00:10:33.866 }, 00:10:33.866 { 00:10:33.866 "name": "BaseBdev4", 00:10:33.866 "uuid": "3bb6d522-21ca-55c1-ad20-298206a82260", 00:10:33.866 "is_configured": true, 00:10:33.866 "data_offset": 2048, 00:10:33.866 "data_size": 63488 00:10:33.866 } 00:10:33.866 ] 00:10:33.866 }' 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.866 09:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.126 09:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:34.126 09:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:34.385 [2024-12-12 09:24:08.207177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.323 09:24:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.323 09:24:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.323 "name": "raid_bdev1", 00:10:35.323 "uuid": "2e653916-1cbe-458b-8868-2a47784d8b71", 00:10:35.323 "strip_size_kb": 64, 00:10:35.323 "state": "online", 00:10:35.323 "raid_level": "raid0", 00:10:35.323 "superblock": true, 00:10:35.323 "num_base_bdevs": 4, 00:10:35.323 "num_base_bdevs_discovered": 4, 00:10:35.323 "num_base_bdevs_operational": 4, 00:10:35.323 "base_bdevs_list": [ 00:10:35.323 { 00:10:35.323 "name": "BaseBdev1", 00:10:35.323 "uuid": "ef510605-71ac-55d0-93d2-f127039020b1", 00:10:35.323 "is_configured": true, 00:10:35.323 "data_offset": 2048, 00:10:35.323 "data_size": 63488 00:10:35.323 }, 00:10:35.323 { 00:10:35.323 "name": "BaseBdev2", 00:10:35.323 "uuid": "f8d2840a-7e4e-5ab5-b50a-8a9b8aef5058", 00:10:35.323 "is_configured": true, 00:10:35.323 "data_offset": 2048, 00:10:35.323 "data_size": 63488 00:10:35.323 }, 00:10:35.323 { 00:10:35.323 "name": "BaseBdev3", 00:10:35.323 "uuid": "c95dab9d-943e-5a68-8007-0b4fe0e22c7d", 00:10:35.323 "is_configured": true, 00:10:35.323 "data_offset": 2048, 00:10:35.323 "data_size": 63488 00:10:35.323 }, 00:10:35.323 { 00:10:35.323 "name": "BaseBdev4", 00:10:35.323 "uuid": "3bb6d522-21ca-55c1-ad20-298206a82260", 00:10:35.323 "is_configured": true, 00:10:35.323 "data_offset": 2048, 00:10:35.323 "data_size": 63488 00:10:35.323 } 00:10:35.323 ] 00:10:35.323 }' 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.323 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.585 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.585 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.585 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.585 [2024-12-12 09:24:09.591892] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.585 [2024-12-12 09:24:09.591932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.585 [2024-12-12 09:24:09.594628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.585 [2024-12-12 09:24:09.594699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.585 [2024-12-12 09:24:09.594746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.585 [2024-12-12 09:24:09.594759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:35.585 { 00:10:35.585 "results": [ 00:10:35.585 { 00:10:35.585 "job": "raid_bdev1", 00:10:35.585 "core_mask": "0x1", 00:10:35.585 "workload": "randrw", 00:10:35.586 "percentage": 50, 00:10:35.586 "status": "finished", 00:10:35.586 "queue_depth": 1, 00:10:35.586 "io_size": 131072, 00:10:35.586 "runtime": 1.385302, 00:10:35.586 "iops": 13860.515613202031, 00:10:35.586 "mibps": 1732.564451650254, 00:10:35.586 "io_failed": 1, 00:10:35.586 "io_timeout": 0, 00:10:35.586 "avg_latency_us": 101.62210140956024, 00:10:35.586 "min_latency_us": 25.4882096069869, 00:10:35.586 "max_latency_us": 1337.907423580786 00:10:35.586 } 00:10:35.586 ], 00:10:35.586 "core_count": 1 00:10:35.586 } 00:10:35.586 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.586 09:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72124 00:10:35.586 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72124 ']' 00:10:35.586 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72124 00:10:35.586 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:35.586 09:24:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.848 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72124 00:10:35.848 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.848 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.848 killing process with pid 72124 00:10:35.848 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72124' 00:10:35.848 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72124 00:10:35.848 [2024-12-12 09:24:09.642052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.848 09:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72124 00:10:36.108 [2024-12-12 09:24:09.991704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1HjhwjKz5k 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:37.487 ************************************ 00:10:37.487 END TEST raid_read_error_test 00:10:37.487 ************************************ 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:37.487 00:10:37.487 real 0m4.835s 
00:10:37.487 user 0m5.566s 00:10:37.487 sys 0m0.692s 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.487 09:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.487 09:24:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:37.487 09:24:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:37.487 09:24:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.487 09:24:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.487 ************************************ 00:10:37.487 START TEST raid_write_error_test 00:10:37.487 ************************************ 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:37.487 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wEJoj7HMa0 00:10:37.488 09:24:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72270 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72270 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72270 ']' 00:10:37.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.488 09:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.488 [2024-12-12 09:24:11.437537] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:37.488 [2024-12-12 09:24:11.437659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72270 ] 00:10:37.747 [2024-12-12 09:24:11.610476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.747 [2024-12-12 09:24:11.744643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.007 [2024-12-12 09:24:11.970015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.007 [2024-12-12 09:24:11.970058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.267 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.267 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:38.267 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.267 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.267 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.267 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.526 BaseBdev1_malloc 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.526 true 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.526 [2024-12-12 09:24:12.315520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.526 [2024-12-12 09:24:12.315601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.526 [2024-12-12 09:24:12.315623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:38.526 [2024-12-12 09:24:12.315634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.526 [2024-12-12 09:24:12.318017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.526 [2024-12-12 09:24:12.318052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.526 BaseBdev1 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.526 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 BaseBdev2_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:38.527 09:24:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 true 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 [2024-12-12 09:24:12.385840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:38.527 [2024-12-12 09:24:12.385891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.527 [2024-12-12 09:24:12.385906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:38.527 [2024-12-12 09:24:12.385916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.527 [2024-12-12 09:24:12.388358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.527 [2024-12-12 09:24:12.388394] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:38.527 BaseBdev2 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:38.527 BaseBdev3_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 true 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 [2024-12-12 09:24:12.492342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:38.527 [2024-12-12 09:24:12.492440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.527 [2024-12-12 09:24:12.492462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:38.527 [2024-12-12 09:24:12.492474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.527 [2024-12-12 09:24:12.494854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.527 [2024-12-12 09:24:12.494893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:38.527 BaseBdev3 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 BaseBdev4_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.527 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.787 true 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.787 [2024-12-12 09:24:12.559891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:38.787 [2024-12-12 09:24:12.559943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.787 [2024-12-12 09:24:12.559972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.787 [2024-12-12 09:24:12.559985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.787 [2024-12-12 09:24:12.562378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.787 [2024-12-12 09:24:12.562415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:38.787 BaseBdev4 
00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.787 [2024-12-12 09:24:12.567948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.787 [2024-12-12 09:24:12.570078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.787 [2024-12-12 09:24:12.570150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.787 [2024-12-12 09:24:12.570209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.787 [2024-12-12 09:24:12.570423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:38.787 [2024-12-12 09:24:12.570439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.787 [2024-12-12 09:24:12.570681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:38.787 [2024-12-12 09:24:12.570844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:38.787 [2024-12-12 09:24:12.570857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:38.787 [2024-12-12 09:24:12.571031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.787 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.787 "name": "raid_bdev1", 00:10:38.787 "uuid": "ff6e5ff4-3696-49cf-be86-61aa63ba4c84", 00:10:38.787 "strip_size_kb": 64, 00:10:38.787 "state": "online", 00:10:38.787 "raid_level": "raid0", 00:10:38.787 "superblock": true, 00:10:38.787 "num_base_bdevs": 4, 00:10:38.787 "num_base_bdevs_discovered": 4, 00:10:38.787 
"num_base_bdevs_operational": 4, 00:10:38.787 "base_bdevs_list": [ 00:10:38.787 { 00:10:38.787 "name": "BaseBdev1", 00:10:38.787 "uuid": "44b2e95e-f394-5376-bd72-fdaa2a587606", 00:10:38.787 "is_configured": true, 00:10:38.787 "data_offset": 2048, 00:10:38.787 "data_size": 63488 00:10:38.787 }, 00:10:38.787 { 00:10:38.787 "name": "BaseBdev2", 00:10:38.787 "uuid": "af8f62ce-bd93-5476-a4e8-1606cd789010", 00:10:38.787 "is_configured": true, 00:10:38.787 "data_offset": 2048, 00:10:38.787 "data_size": 63488 00:10:38.787 }, 00:10:38.787 { 00:10:38.787 "name": "BaseBdev3", 00:10:38.787 "uuid": "33dbcbf2-dcde-505c-b2c7-451c8379bc46", 00:10:38.787 "is_configured": true, 00:10:38.787 "data_offset": 2048, 00:10:38.787 "data_size": 63488 00:10:38.787 }, 00:10:38.787 { 00:10:38.787 "name": "BaseBdev4", 00:10:38.788 "uuid": "e7050d7a-e87a-5a01-9441-14c0162b9751", 00:10:38.788 "is_configured": true, 00:10:38.788 "data_offset": 2048, 00:10:38.788 "data_size": 63488 00:10:38.788 } 00:10:38.788 ] 00:10:38.788 }' 00:10:38.788 09:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.788 09:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 09:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.047 09:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.307 [2024-12-12 09:24:13.096416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.245 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.246 "name": "raid_bdev1", 00:10:40.246 "uuid": "ff6e5ff4-3696-49cf-be86-61aa63ba4c84", 00:10:40.246 "strip_size_kb": 64, 00:10:40.246 "state": "online", 00:10:40.246 "raid_level": "raid0", 00:10:40.246 "superblock": true, 00:10:40.246 "num_base_bdevs": 4, 00:10:40.246 "num_base_bdevs_discovered": 4, 00:10:40.246 "num_base_bdevs_operational": 4, 00:10:40.246 "base_bdevs_list": [ 00:10:40.246 { 00:10:40.246 "name": "BaseBdev1", 00:10:40.246 "uuid": "44b2e95e-f394-5376-bd72-fdaa2a587606", 00:10:40.246 "is_configured": true, 00:10:40.246 "data_offset": 2048, 00:10:40.246 "data_size": 63488 00:10:40.246 }, 00:10:40.246 { 00:10:40.246 "name": "BaseBdev2", 00:10:40.246 "uuid": "af8f62ce-bd93-5476-a4e8-1606cd789010", 00:10:40.246 "is_configured": true, 00:10:40.246 "data_offset": 2048, 00:10:40.246 "data_size": 63488 00:10:40.246 }, 00:10:40.246 { 00:10:40.246 "name": "BaseBdev3", 00:10:40.246 "uuid": "33dbcbf2-dcde-505c-b2c7-451c8379bc46", 00:10:40.246 "is_configured": true, 00:10:40.246 "data_offset": 2048, 00:10:40.246 "data_size": 63488 00:10:40.246 }, 00:10:40.246 { 00:10:40.246 "name": "BaseBdev4", 00:10:40.246 "uuid": "e7050d7a-e87a-5a01-9441-14c0162b9751", 00:10:40.246 "is_configured": true, 00:10:40.246 "data_offset": 2048, 00:10:40.246 "data_size": 63488 00:10:40.246 } 00:10:40.246 ] 00:10:40.246 }' 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.246 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:40.506 [2024-12-12 09:24:14.469069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.506 [2024-12-12 09:24:14.469178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.506 [2024-12-12 09:24:14.471880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.506 [2024-12-12 09:24:14.471951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.506 [2024-12-12 09:24:14.472012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.506 [2024-12-12 09:24:14.472027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:40.506 { 00:10:40.506 "results": [ 00:10:40.506 { 00:10:40.506 "job": "raid_bdev1", 00:10:40.506 "core_mask": "0x1", 00:10:40.506 "workload": "randrw", 00:10:40.506 "percentage": 50, 00:10:40.506 "status": "finished", 00:10:40.506 "queue_depth": 1, 00:10:40.506 "io_size": 131072, 00:10:40.506 "runtime": 1.373378, 00:10:40.506 "iops": 13786.44480980473, 00:10:40.506 "mibps": 1723.3056012255913, 00:10:40.506 "io_failed": 1, 00:10:40.506 "io_timeout": 0, 00:10:40.506 "avg_latency_us": 102.15054536145836, 00:10:40.506 "min_latency_us": 25.041048034934498, 00:10:40.506 "max_latency_us": 1395.1441048034935 00:10:40.506 } 00:10:40.506 ], 00:10:40.506 "core_count": 1 00:10:40.506 } 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72270 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72270 ']' 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72270 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72270 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72270' 00:10:40.506 killing process with pid 72270 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72270 00:10:40.506 [2024-12-12 09:24:14.507666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.506 09:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72270 00:10:41.074 [2024-12-12 09:24:14.851929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wEJoj7HMa0 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:42.454 ************************************ 00:10:42.454 END TEST raid_write_error_test 00:10:42.454 ************************************ 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:42.454 00:10:42.454 real 0m4.786s 00:10:42.454 user 0m5.463s 00:10:42.454 sys 0m0.689s 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.454 09:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.454 09:24:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:42.454 09:24:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:42.454 09:24:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.454 09:24:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.454 09:24:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.454 ************************************ 00:10:42.454 START TEST raid_state_function_test 00:10:42.454 ************************************ 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.454 09:24:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.454 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:42.455 09:24:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72419 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72419' 00:10:42.455 Process raid pid: 72419 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72419 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72419 ']' 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.455 09:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.455 [2024-12-12 09:24:16.287209] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:42.455 [2024-12-12 09:24:16.287425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.455 [2024-12-12 09:24:16.460825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.714 [2024-12-12 09:24:16.594549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.973 [2024-12-12 09:24:16.823023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.973 [2024-12-12 09:24:16.823168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.233 [2024-12-12 09:24:17.118462] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.233 [2024-12-12 09:24:17.118518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.233 [2024-12-12 09:24:17.118528] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.233 [2024-12-12 09:24:17.118538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.233 [2024-12-12 09:24:17.118544] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:43.233 [2024-12-12 09:24:17.118553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.233 [2024-12-12 09:24:17.118559] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.233 [2024-12-12 09:24:17.118568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.233 "name": "Existed_Raid", 00:10:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.233 "strip_size_kb": 64, 00:10:43.233 "state": "configuring", 00:10:43.233 "raid_level": "concat", 00:10:43.233 "superblock": false, 00:10:43.233 "num_base_bdevs": 4, 00:10:43.233 "num_base_bdevs_discovered": 0, 00:10:43.233 "num_base_bdevs_operational": 4, 00:10:43.233 "base_bdevs_list": [ 00:10:43.233 { 00:10:43.233 "name": "BaseBdev1", 00:10:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.233 "is_configured": false, 00:10:43.233 "data_offset": 0, 00:10:43.233 "data_size": 0 00:10:43.233 }, 00:10:43.233 { 00:10:43.233 "name": "BaseBdev2", 00:10:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.233 "is_configured": false, 00:10:43.233 "data_offset": 0, 00:10:43.233 "data_size": 0 00:10:43.233 }, 00:10:43.233 { 00:10:43.233 "name": "BaseBdev3", 00:10:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.233 "is_configured": false, 00:10:43.233 "data_offset": 0, 00:10:43.233 "data_size": 0 00:10:43.233 }, 00:10:43.233 { 00:10:43.233 "name": "BaseBdev4", 00:10:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.233 "is_configured": false, 00:10:43.233 "data_offset": 0, 00:10:43.233 "data_size": 0 00:10:43.233 } 00:10:43.233 ] 00:10:43.233 }' 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.233 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.493 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:43.493 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.493 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.753 [2024-12-12 09:24:17.517754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.753 [2024-12-12 09:24:17.517863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.753 [2024-12-12 09:24:17.529726] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.753 [2024-12-12 09:24:17.529827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.753 [2024-12-12 09:24:17.529856] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.753 [2024-12-12 09:24:17.529880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.753 [2024-12-12 09:24:17.529898] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.753 [2024-12-12 09:24:17.529920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.753 [2024-12-12 09:24:17.529937] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.753 [2024-12-12 09:24:17.529969] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.753 [2024-12-12 09:24:17.581999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.753 BaseBdev1 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.753 [ 00:10:43.753 { 00:10:43.753 "name": "BaseBdev1", 00:10:43.753 "aliases": [ 00:10:43.753 "6c25ba2c-b06b-412f-ad06-d196c4540418" 00:10:43.753 ], 00:10:43.753 "product_name": "Malloc disk", 00:10:43.753 "block_size": 512, 00:10:43.753 "num_blocks": 65536, 00:10:43.753 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:43.753 "assigned_rate_limits": { 00:10:43.753 "rw_ios_per_sec": 0, 00:10:43.753 "rw_mbytes_per_sec": 0, 00:10:43.753 "r_mbytes_per_sec": 0, 00:10:43.753 "w_mbytes_per_sec": 0 00:10:43.753 }, 00:10:43.753 "claimed": true, 00:10:43.753 "claim_type": "exclusive_write", 00:10:43.753 "zoned": false, 00:10:43.753 "supported_io_types": { 00:10:43.753 "read": true, 00:10:43.753 "write": true, 00:10:43.753 "unmap": true, 00:10:43.753 "flush": true, 00:10:43.753 "reset": true, 00:10:43.753 "nvme_admin": false, 00:10:43.753 "nvme_io": false, 00:10:43.753 "nvme_io_md": false, 00:10:43.753 "write_zeroes": true, 00:10:43.753 "zcopy": true, 00:10:43.753 "get_zone_info": false, 00:10:43.753 "zone_management": false, 00:10:43.753 "zone_append": false, 00:10:43.753 "compare": false, 00:10:43.753 "compare_and_write": false, 00:10:43.753 "abort": true, 00:10:43.753 "seek_hole": false, 00:10:43.753 "seek_data": false, 00:10:43.753 "copy": true, 00:10:43.753 "nvme_iov_md": false 00:10:43.753 }, 00:10:43.753 "memory_domains": [ 00:10:43.753 { 00:10:43.753 "dma_device_id": "system", 00:10:43.753 "dma_device_type": 1 00:10:43.753 }, 00:10:43.753 { 00:10:43.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.753 "dma_device_type": 2 00:10:43.753 } 00:10:43.753 ], 00:10:43.753 "driver_specific": {} 00:10:43.753 } 00:10:43.753 ] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.753 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.753 "name": "Existed_Raid", 
00:10:43.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.753 "strip_size_kb": 64, 00:10:43.753 "state": "configuring", 00:10:43.753 "raid_level": "concat", 00:10:43.753 "superblock": false, 00:10:43.753 "num_base_bdevs": 4, 00:10:43.753 "num_base_bdevs_discovered": 1, 00:10:43.753 "num_base_bdevs_operational": 4, 00:10:43.753 "base_bdevs_list": [ 00:10:43.754 { 00:10:43.754 "name": "BaseBdev1", 00:10:43.754 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:43.754 "is_configured": true, 00:10:43.754 "data_offset": 0, 00:10:43.754 "data_size": 65536 00:10:43.754 }, 00:10:43.754 { 00:10:43.754 "name": "BaseBdev2", 00:10:43.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.754 "is_configured": false, 00:10:43.754 "data_offset": 0, 00:10:43.754 "data_size": 0 00:10:43.754 }, 00:10:43.754 { 00:10:43.754 "name": "BaseBdev3", 00:10:43.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.754 "is_configured": false, 00:10:43.754 "data_offset": 0, 00:10:43.754 "data_size": 0 00:10:43.754 }, 00:10:43.754 { 00:10:43.754 "name": "BaseBdev4", 00:10:43.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.754 "is_configured": false, 00:10:43.754 "data_offset": 0, 00:10:43.754 "data_size": 0 00:10:43.754 } 00:10:43.754 ] 00:10:43.754 }' 00:10:43.754 09:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.754 09:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.323 [2024-12-12 09:24:18.081137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.323 [2024-12-12 09:24:18.081236] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.323 [2024-12-12 09:24:18.093177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.323 [2024-12-12 09:24:18.095254] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.323 [2024-12-12 09:24:18.095296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.323 [2024-12-12 09:24:18.095306] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.323 [2024-12-12 09:24:18.095316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.323 [2024-12-12 09:24:18.095322] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.323 [2024-12-12 09:24:18.095330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.323 "name": "Existed_Raid", 00:10:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.323 "strip_size_kb": 64, 00:10:44.323 "state": "configuring", 00:10:44.323 "raid_level": "concat", 00:10:44.323 "superblock": false, 00:10:44.323 "num_base_bdevs": 4, 00:10:44.323 
"num_base_bdevs_discovered": 1, 00:10:44.323 "num_base_bdevs_operational": 4, 00:10:44.323 "base_bdevs_list": [ 00:10:44.323 { 00:10:44.323 "name": "BaseBdev1", 00:10:44.323 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:44.323 "is_configured": true, 00:10:44.323 "data_offset": 0, 00:10:44.323 "data_size": 65536 00:10:44.323 }, 00:10:44.323 { 00:10:44.323 "name": "BaseBdev2", 00:10:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.323 "is_configured": false, 00:10:44.323 "data_offset": 0, 00:10:44.323 "data_size": 0 00:10:44.323 }, 00:10:44.323 { 00:10:44.323 "name": "BaseBdev3", 00:10:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.323 "is_configured": false, 00:10:44.323 "data_offset": 0, 00:10:44.323 "data_size": 0 00:10:44.323 }, 00:10:44.323 { 00:10:44.323 "name": "BaseBdev4", 00:10:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.323 "is_configured": false, 00:10:44.323 "data_offset": 0, 00:10:44.323 "data_size": 0 00:10:44.323 } 00:10:44.323 ] 00:10:44.323 }' 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.323 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.583 [2024-12-12 09:24:18.597188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.583 BaseBdev2 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.583 09:24:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.583 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.841 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.841 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.841 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.841 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.841 [ 00:10:44.841 { 00:10:44.841 "name": "BaseBdev2", 00:10:44.841 "aliases": [ 00:10:44.841 "75d72395-6416-4b43-98e9-a959d2c7ed16" 00:10:44.841 ], 00:10:44.841 "product_name": "Malloc disk", 00:10:44.841 "block_size": 512, 00:10:44.841 "num_blocks": 65536, 00:10:44.841 "uuid": "75d72395-6416-4b43-98e9-a959d2c7ed16", 00:10:44.841 "assigned_rate_limits": { 00:10:44.841 "rw_ios_per_sec": 0, 00:10:44.841 "rw_mbytes_per_sec": 0, 00:10:44.841 "r_mbytes_per_sec": 0, 00:10:44.841 "w_mbytes_per_sec": 0 00:10:44.842 }, 00:10:44.842 "claimed": true, 00:10:44.842 "claim_type": "exclusive_write", 00:10:44.842 "zoned": false, 00:10:44.842 "supported_io_types": { 
00:10:44.842 "read": true, 00:10:44.842 "write": true, 00:10:44.842 "unmap": true, 00:10:44.842 "flush": true, 00:10:44.842 "reset": true, 00:10:44.842 "nvme_admin": false, 00:10:44.842 "nvme_io": false, 00:10:44.842 "nvme_io_md": false, 00:10:44.842 "write_zeroes": true, 00:10:44.842 "zcopy": true, 00:10:44.842 "get_zone_info": false, 00:10:44.842 "zone_management": false, 00:10:44.842 "zone_append": false, 00:10:44.842 "compare": false, 00:10:44.842 "compare_and_write": false, 00:10:44.842 "abort": true, 00:10:44.842 "seek_hole": false, 00:10:44.842 "seek_data": false, 00:10:44.842 "copy": true, 00:10:44.842 "nvme_iov_md": false 00:10:44.842 }, 00:10:44.842 "memory_domains": [ 00:10:44.842 { 00:10:44.842 "dma_device_id": "system", 00:10:44.842 "dma_device_type": 1 00:10:44.842 }, 00:10:44.842 { 00:10:44.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.842 "dma_device_type": 2 00:10:44.842 } 00:10:44.842 ], 00:10:44.842 "driver_specific": {} 00:10:44.842 } 00:10:44.842 ] 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.842 "name": "Existed_Raid", 00:10:44.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.842 "strip_size_kb": 64, 00:10:44.842 "state": "configuring", 00:10:44.842 "raid_level": "concat", 00:10:44.842 "superblock": false, 00:10:44.842 "num_base_bdevs": 4, 00:10:44.842 "num_base_bdevs_discovered": 2, 00:10:44.842 "num_base_bdevs_operational": 4, 00:10:44.842 "base_bdevs_list": [ 00:10:44.842 { 00:10:44.842 "name": "BaseBdev1", 00:10:44.842 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:44.842 "is_configured": true, 00:10:44.842 "data_offset": 0, 00:10:44.842 "data_size": 65536 00:10:44.842 }, 00:10:44.842 { 00:10:44.842 "name": "BaseBdev2", 00:10:44.842 "uuid": "75d72395-6416-4b43-98e9-a959d2c7ed16", 00:10:44.842 
"is_configured": true, 00:10:44.842 "data_offset": 0, 00:10:44.842 "data_size": 65536 00:10:44.842 }, 00:10:44.842 { 00:10:44.842 "name": "BaseBdev3", 00:10:44.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.842 "is_configured": false, 00:10:44.842 "data_offset": 0, 00:10:44.842 "data_size": 0 00:10:44.842 }, 00:10:44.842 { 00:10:44.842 "name": "BaseBdev4", 00:10:44.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.842 "is_configured": false, 00:10:44.842 "data_offset": 0, 00:10:44.842 "data_size": 0 00:10:44.842 } 00:10:44.842 ] 00:10:44.842 }' 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.842 09:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.101 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.101 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.101 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.369 [2024-12-12 09:24:19.168527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.369 BaseBdev3 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.369 [ 00:10:45.369 { 00:10:45.369 "name": "BaseBdev3", 00:10:45.369 "aliases": [ 00:10:45.369 "2a0d53c4-6937-4b4b-947c-b17d1e426174" 00:10:45.369 ], 00:10:45.369 "product_name": "Malloc disk", 00:10:45.369 "block_size": 512, 00:10:45.369 "num_blocks": 65536, 00:10:45.369 "uuid": "2a0d53c4-6937-4b4b-947c-b17d1e426174", 00:10:45.369 "assigned_rate_limits": { 00:10:45.369 "rw_ios_per_sec": 0, 00:10:45.369 "rw_mbytes_per_sec": 0, 00:10:45.369 "r_mbytes_per_sec": 0, 00:10:45.369 "w_mbytes_per_sec": 0 00:10:45.369 }, 00:10:45.369 "claimed": true, 00:10:45.369 "claim_type": "exclusive_write", 00:10:45.369 "zoned": false, 00:10:45.369 "supported_io_types": { 00:10:45.369 "read": true, 00:10:45.369 "write": true, 00:10:45.369 "unmap": true, 00:10:45.369 "flush": true, 00:10:45.369 "reset": true, 00:10:45.369 "nvme_admin": false, 00:10:45.369 "nvme_io": false, 00:10:45.369 "nvme_io_md": false, 00:10:45.369 "write_zeroes": true, 00:10:45.369 "zcopy": true, 00:10:45.369 "get_zone_info": false, 00:10:45.369 "zone_management": false, 00:10:45.369 "zone_append": false, 00:10:45.369 "compare": false, 00:10:45.369 "compare_and_write": false, 
00:10:45.369 "abort": true, 00:10:45.369 "seek_hole": false, 00:10:45.369 "seek_data": false, 00:10:45.369 "copy": true, 00:10:45.369 "nvme_iov_md": false 00:10:45.369 }, 00:10:45.369 "memory_domains": [ 00:10:45.369 { 00:10:45.369 "dma_device_id": "system", 00:10:45.369 "dma_device_type": 1 00:10:45.369 }, 00:10:45.369 { 00:10:45.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.369 "dma_device_type": 2 00:10:45.369 } 00:10:45.369 ], 00:10:45.369 "driver_specific": {} 00:10:45.369 } 00:10:45.369 ] 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.369 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.370 "name": "Existed_Raid", 00:10:45.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.370 "strip_size_kb": 64, 00:10:45.370 "state": "configuring", 00:10:45.370 "raid_level": "concat", 00:10:45.370 "superblock": false, 00:10:45.370 "num_base_bdevs": 4, 00:10:45.370 "num_base_bdevs_discovered": 3, 00:10:45.370 "num_base_bdevs_operational": 4, 00:10:45.370 "base_bdevs_list": [ 00:10:45.370 { 00:10:45.370 "name": "BaseBdev1", 00:10:45.370 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:45.370 "is_configured": true, 00:10:45.370 "data_offset": 0, 00:10:45.370 "data_size": 65536 00:10:45.370 }, 00:10:45.370 { 00:10:45.370 "name": "BaseBdev2", 00:10:45.370 "uuid": "75d72395-6416-4b43-98e9-a959d2c7ed16", 00:10:45.370 "is_configured": true, 00:10:45.370 "data_offset": 0, 00:10:45.370 "data_size": 65536 00:10:45.370 }, 00:10:45.370 { 00:10:45.370 "name": "BaseBdev3", 00:10:45.370 "uuid": "2a0d53c4-6937-4b4b-947c-b17d1e426174", 00:10:45.370 "is_configured": true, 00:10:45.370 "data_offset": 0, 00:10:45.370 "data_size": 65536 00:10:45.370 }, 00:10:45.370 { 00:10:45.370 "name": "BaseBdev4", 00:10:45.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.370 "is_configured": false, 
00:10:45.370 "data_offset": 0, 00:10:45.370 "data_size": 0 00:10:45.370 } 00:10:45.370 ] 00:10:45.370 }' 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.370 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.955 [2024-12-12 09:24:19.718729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.955 [2024-12-12 09:24:19.718781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.955 [2024-12-12 09:24:19.718790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:45.955 [2024-12-12 09:24:19.719134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:45.955 [2024-12-12 09:24:19.719324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:45.955 [2024-12-12 09:24:19.719343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:45.955 [2024-12-12 09:24:19.719638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.955 BaseBdev4 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.955 [ 00:10:45.955 { 00:10:45.955 "name": "BaseBdev4", 00:10:45.955 "aliases": [ 00:10:45.955 "29120ab0-9fe8-4a0b-a159-1395c8e75eb8" 00:10:45.955 ], 00:10:45.955 "product_name": "Malloc disk", 00:10:45.955 "block_size": 512, 00:10:45.955 "num_blocks": 65536, 00:10:45.955 "uuid": "29120ab0-9fe8-4a0b-a159-1395c8e75eb8", 00:10:45.955 "assigned_rate_limits": { 00:10:45.955 "rw_ios_per_sec": 0, 00:10:45.955 "rw_mbytes_per_sec": 0, 00:10:45.955 "r_mbytes_per_sec": 0, 00:10:45.955 "w_mbytes_per_sec": 0 00:10:45.955 }, 00:10:45.955 "claimed": true, 00:10:45.955 "claim_type": "exclusive_write", 00:10:45.955 "zoned": false, 00:10:45.955 "supported_io_types": { 00:10:45.955 "read": true, 00:10:45.955 "write": true, 00:10:45.955 "unmap": true, 00:10:45.955 "flush": true, 00:10:45.955 "reset": true, 00:10:45.955 
"nvme_admin": false, 00:10:45.955 "nvme_io": false, 00:10:45.955 "nvme_io_md": false, 00:10:45.955 "write_zeroes": true, 00:10:45.955 "zcopy": true, 00:10:45.955 "get_zone_info": false, 00:10:45.955 "zone_management": false, 00:10:45.955 "zone_append": false, 00:10:45.955 "compare": false, 00:10:45.955 "compare_and_write": false, 00:10:45.955 "abort": true, 00:10:45.955 "seek_hole": false, 00:10:45.955 "seek_data": false, 00:10:45.955 "copy": true, 00:10:45.955 "nvme_iov_md": false 00:10:45.955 }, 00:10:45.955 "memory_domains": [ 00:10:45.955 { 00:10:45.955 "dma_device_id": "system", 00:10:45.955 "dma_device_type": 1 00:10:45.955 }, 00:10:45.955 { 00:10:45.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.955 "dma_device_type": 2 00:10:45.955 } 00:10:45.955 ], 00:10:45.955 "driver_specific": {} 00:10:45.955 } 00:10:45.955 ] 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.955 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.956 
09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.956 "name": "Existed_Raid", 00:10:45.956 "uuid": "19eba3b7-906f-4460-ba0a-e19064f6c8bb", 00:10:45.956 "strip_size_kb": 64, 00:10:45.956 "state": "online", 00:10:45.956 "raid_level": "concat", 00:10:45.956 "superblock": false, 00:10:45.956 "num_base_bdevs": 4, 00:10:45.956 "num_base_bdevs_discovered": 4, 00:10:45.956 "num_base_bdevs_operational": 4, 00:10:45.956 "base_bdevs_list": [ 00:10:45.956 { 00:10:45.956 "name": "BaseBdev1", 00:10:45.956 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:45.956 "is_configured": true, 00:10:45.956 "data_offset": 0, 00:10:45.956 "data_size": 65536 00:10:45.956 }, 00:10:45.956 { 00:10:45.956 "name": "BaseBdev2", 00:10:45.956 "uuid": "75d72395-6416-4b43-98e9-a959d2c7ed16", 00:10:45.956 "is_configured": true, 00:10:45.956 "data_offset": 0, 00:10:45.956 "data_size": 65536 00:10:45.956 }, 00:10:45.956 { 00:10:45.956 "name": "BaseBdev3", 
00:10:45.956 "uuid": "2a0d53c4-6937-4b4b-947c-b17d1e426174", 00:10:45.956 "is_configured": true, 00:10:45.956 "data_offset": 0, 00:10:45.956 "data_size": 65536 00:10:45.956 }, 00:10:45.956 { 00:10:45.956 "name": "BaseBdev4", 00:10:45.956 "uuid": "29120ab0-9fe8-4a0b-a159-1395c8e75eb8", 00:10:45.956 "is_configured": true, 00:10:45.956 "data_offset": 0, 00:10:45.956 "data_size": 65536 00:10:45.956 } 00:10:45.956 ] 00:10:45.956 }' 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.956 09:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.214 [2024-12-12 09:24:20.206344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.214 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.473 
09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.473 "name": "Existed_Raid", 00:10:46.473 "aliases": [ 00:10:46.473 "19eba3b7-906f-4460-ba0a-e19064f6c8bb" 00:10:46.473 ], 00:10:46.473 "product_name": "Raid Volume", 00:10:46.473 "block_size": 512, 00:10:46.473 "num_blocks": 262144, 00:10:46.473 "uuid": "19eba3b7-906f-4460-ba0a-e19064f6c8bb", 00:10:46.473 "assigned_rate_limits": { 00:10:46.473 "rw_ios_per_sec": 0, 00:10:46.473 "rw_mbytes_per_sec": 0, 00:10:46.473 "r_mbytes_per_sec": 0, 00:10:46.473 "w_mbytes_per_sec": 0 00:10:46.473 }, 00:10:46.473 "claimed": false, 00:10:46.473 "zoned": false, 00:10:46.473 "supported_io_types": { 00:10:46.473 "read": true, 00:10:46.473 "write": true, 00:10:46.473 "unmap": true, 00:10:46.473 "flush": true, 00:10:46.473 "reset": true, 00:10:46.473 "nvme_admin": false, 00:10:46.473 "nvme_io": false, 00:10:46.473 "nvme_io_md": false, 00:10:46.473 "write_zeroes": true, 00:10:46.473 "zcopy": false, 00:10:46.473 "get_zone_info": false, 00:10:46.473 "zone_management": false, 00:10:46.473 "zone_append": false, 00:10:46.473 "compare": false, 00:10:46.473 "compare_and_write": false, 00:10:46.473 "abort": false, 00:10:46.473 "seek_hole": false, 00:10:46.473 "seek_data": false, 00:10:46.473 "copy": false, 00:10:46.473 "nvme_iov_md": false 00:10:46.473 }, 00:10:46.473 "memory_domains": [ 00:10:46.473 { 00:10:46.473 "dma_device_id": "system", 00:10:46.473 "dma_device_type": 1 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.473 "dma_device_type": 2 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": "system", 00:10:46.473 "dma_device_type": 1 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.473 "dma_device_type": 2 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": "system", 00:10:46.473 "dma_device_type": 1 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:46.473 "dma_device_type": 2 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": "system", 00:10:46.473 "dma_device_type": 1 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.473 "dma_device_type": 2 00:10:46.473 } 00:10:46.473 ], 00:10:46.473 "driver_specific": { 00:10:46.473 "raid": { 00:10:46.473 "uuid": "19eba3b7-906f-4460-ba0a-e19064f6c8bb", 00:10:46.473 "strip_size_kb": 64, 00:10:46.473 "state": "online", 00:10:46.473 "raid_level": "concat", 00:10:46.473 "superblock": false, 00:10:46.473 "num_base_bdevs": 4, 00:10:46.473 "num_base_bdevs_discovered": 4, 00:10:46.473 "num_base_bdevs_operational": 4, 00:10:46.473 "base_bdevs_list": [ 00:10:46.473 { 00:10:46.473 "name": "BaseBdev1", 00:10:46.473 "uuid": "6c25ba2c-b06b-412f-ad06-d196c4540418", 00:10:46.473 "is_configured": true, 00:10:46.473 "data_offset": 0, 00:10:46.473 "data_size": 65536 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "name": "BaseBdev2", 00:10:46.473 "uuid": "75d72395-6416-4b43-98e9-a959d2c7ed16", 00:10:46.473 "is_configured": true, 00:10:46.473 "data_offset": 0, 00:10:46.473 "data_size": 65536 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "name": "BaseBdev3", 00:10:46.473 "uuid": "2a0d53c4-6937-4b4b-947c-b17d1e426174", 00:10:46.473 "is_configured": true, 00:10:46.473 "data_offset": 0, 00:10:46.473 "data_size": 65536 00:10:46.473 }, 00:10:46.473 { 00:10:46.473 "name": "BaseBdev4", 00:10:46.473 "uuid": "29120ab0-9fe8-4a0b-a159-1395c8e75eb8", 00:10:46.473 "is_configured": true, 00:10:46.473 "data_offset": 0, 00:10:46.473 "data_size": 65536 00:10:46.473 } 00:10:46.473 ] 00:10:46.473 } 00:10:46.473 } 00:10:46.473 }' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:46.473 BaseBdev2 
00:10:46.473 BaseBdev3 00:10:46.473 BaseBdev4' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.473 09:24:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.473 09:24:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.473 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.473 [2024-12-12 09:24:20.477511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.473 [2024-12-12 09:24:20.477586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.473 [2024-12-12 09:24:20.477679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.731 "name": "Existed_Raid", 00:10:46.731 "uuid": "19eba3b7-906f-4460-ba0a-e19064f6c8bb", 00:10:46.731 "strip_size_kb": 64, 00:10:46.731 "state": "offline", 00:10:46.731 "raid_level": "concat", 00:10:46.731 "superblock": false, 00:10:46.731 "num_base_bdevs": 4, 00:10:46.731 "num_base_bdevs_discovered": 3, 00:10:46.731 "num_base_bdevs_operational": 3, 00:10:46.731 "base_bdevs_list": [ 00:10:46.731 { 00:10:46.731 "name": null, 00:10:46.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.731 "is_configured": false, 00:10:46.731 "data_offset": 0, 00:10:46.731 "data_size": 65536 00:10:46.731 }, 00:10:46.731 { 00:10:46.731 "name": "BaseBdev2", 00:10:46.731 "uuid": "75d72395-6416-4b43-98e9-a959d2c7ed16", 00:10:46.731 "is_configured": 
true, 00:10:46.731 "data_offset": 0, 00:10:46.731 "data_size": 65536 00:10:46.731 }, 00:10:46.731 { 00:10:46.731 "name": "BaseBdev3", 00:10:46.731 "uuid": "2a0d53c4-6937-4b4b-947c-b17d1e426174", 00:10:46.731 "is_configured": true, 00:10:46.731 "data_offset": 0, 00:10:46.731 "data_size": 65536 00:10:46.731 }, 00:10:46.731 { 00:10:46.731 "name": "BaseBdev4", 00:10:46.731 "uuid": "29120ab0-9fe8-4a0b-a159-1395c8e75eb8", 00:10:46.731 "is_configured": true, 00:10:46.731 "data_offset": 0, 00:10:46.731 "data_size": 65536 00:10:46.731 } 00:10:46.731 ] 00:10:46.731 }' 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.731 09:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 [2024-12-12 09:24:21.076679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:47.297 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.298 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.298 [2024-12-12 09:24:21.232594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.556 09:24:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.556 [2024-12-12 09:24:21.383305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:47.556 [2024-12-12 09:24:21.383367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.556 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.815 BaseBdev2 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.815 [ 00:10:47.815 { 00:10:47.815 "name": "BaseBdev2", 00:10:47.815 "aliases": [ 00:10:47.815 "91cd9173-30ef-4bda-9756-b387ffd0c281" 00:10:47.815 ], 00:10:47.815 "product_name": "Malloc disk", 00:10:47.815 "block_size": 512, 00:10:47.815 "num_blocks": 65536, 00:10:47.815 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:47.815 "assigned_rate_limits": { 00:10:47.815 "rw_ios_per_sec": 0, 00:10:47.815 "rw_mbytes_per_sec": 0, 00:10:47.815 "r_mbytes_per_sec": 0, 00:10:47.815 "w_mbytes_per_sec": 0 00:10:47.815 }, 00:10:47.815 "claimed": false, 00:10:47.815 "zoned": false, 00:10:47.815 "supported_io_types": { 00:10:47.815 "read": true, 00:10:47.815 "write": true, 00:10:47.815 "unmap": true, 00:10:47.815 "flush": true, 00:10:47.815 "reset": true, 00:10:47.815 "nvme_admin": false, 00:10:47.815 "nvme_io": false, 00:10:47.815 "nvme_io_md": false, 00:10:47.815 "write_zeroes": true, 00:10:47.815 "zcopy": true, 00:10:47.815 "get_zone_info": false, 00:10:47.815 "zone_management": false, 00:10:47.815 "zone_append": false, 00:10:47.815 "compare": false, 00:10:47.815 "compare_and_write": false, 00:10:47.815 "abort": true, 00:10:47.815 "seek_hole": false, 00:10:47.815 
"seek_data": false, 00:10:47.815 "copy": true, 00:10:47.815 "nvme_iov_md": false 00:10:47.815 }, 00:10:47.815 "memory_domains": [ 00:10:47.815 { 00:10:47.815 "dma_device_id": "system", 00:10:47.815 "dma_device_type": 1 00:10:47.815 }, 00:10:47.815 { 00:10:47.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.815 "dma_device_type": 2 00:10:47.815 } 00:10:47.815 ], 00:10:47.815 "driver_specific": {} 00:10:47.815 } 00:10:47.815 ] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.815 BaseBdev3 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.815 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.815 [ 00:10:47.815 { 00:10:47.815 "name": "BaseBdev3", 00:10:47.815 "aliases": [ 00:10:47.815 "e7cb9587-c12f-4ebd-82c9-cd012a6654cd" 00:10:47.815 ], 00:10:47.815 "product_name": "Malloc disk", 00:10:47.815 "block_size": 512, 00:10:47.816 "num_blocks": 65536, 00:10:47.816 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:47.816 "assigned_rate_limits": { 00:10:47.816 "rw_ios_per_sec": 0, 00:10:47.816 "rw_mbytes_per_sec": 0, 00:10:47.816 "r_mbytes_per_sec": 0, 00:10:47.816 "w_mbytes_per_sec": 0 00:10:47.816 }, 00:10:47.816 "claimed": false, 00:10:47.816 "zoned": false, 00:10:47.816 "supported_io_types": { 00:10:47.816 "read": true, 00:10:47.816 "write": true, 00:10:47.816 "unmap": true, 00:10:47.816 "flush": true, 00:10:47.816 "reset": true, 00:10:47.816 "nvme_admin": false, 00:10:47.816 "nvme_io": false, 00:10:47.816 "nvme_io_md": false, 00:10:47.816 "write_zeroes": true, 00:10:47.816 "zcopy": true, 00:10:47.816 "get_zone_info": false, 00:10:47.816 "zone_management": false, 00:10:47.816 "zone_append": false, 00:10:47.816 "compare": false, 00:10:47.816 "compare_and_write": false, 00:10:47.816 "abort": true, 00:10:47.816 "seek_hole": false, 00:10:47.816 "seek_data": false, 
00:10:47.816 "copy": true, 00:10:47.816 "nvme_iov_md": false 00:10:47.816 }, 00:10:47.816 "memory_domains": [ 00:10:47.816 { 00:10:47.816 "dma_device_id": "system", 00:10:47.816 "dma_device_type": 1 00:10:47.816 }, 00:10:47.816 { 00:10:47.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.816 "dma_device_type": 2 00:10:47.816 } 00:10:47.816 ], 00:10:47.816 "driver_specific": {} 00:10:47.816 } 00:10:47.816 ] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.816 BaseBdev4 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.816 
09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.816 [ 00:10:47.816 { 00:10:47.816 "name": "BaseBdev4", 00:10:47.816 "aliases": [ 00:10:47.816 "b27841de-6f13-4ad9-a94d-86084310a562" 00:10:47.816 ], 00:10:47.816 "product_name": "Malloc disk", 00:10:47.816 "block_size": 512, 00:10:47.816 "num_blocks": 65536, 00:10:47.816 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:47.816 "assigned_rate_limits": { 00:10:47.816 "rw_ios_per_sec": 0, 00:10:47.816 "rw_mbytes_per_sec": 0, 00:10:47.816 "r_mbytes_per_sec": 0, 00:10:47.816 "w_mbytes_per_sec": 0 00:10:47.816 }, 00:10:47.816 "claimed": false, 00:10:47.816 "zoned": false, 00:10:47.816 "supported_io_types": { 00:10:47.816 "read": true, 00:10:47.816 "write": true, 00:10:47.816 "unmap": true, 00:10:47.816 "flush": true, 00:10:47.816 "reset": true, 00:10:47.816 "nvme_admin": false, 00:10:47.816 "nvme_io": false, 00:10:47.816 "nvme_io_md": false, 00:10:47.816 "write_zeroes": true, 00:10:47.816 "zcopy": true, 00:10:47.816 "get_zone_info": false, 00:10:47.816 "zone_management": false, 00:10:47.816 "zone_append": false, 00:10:47.816 "compare": false, 00:10:47.816 "compare_and_write": false, 00:10:47.816 "abort": true, 00:10:47.816 "seek_hole": false, 00:10:47.816 "seek_data": false, 00:10:47.816 
"copy": true, 00:10:47.816 "nvme_iov_md": false 00:10:47.816 }, 00:10:47.816 "memory_domains": [ 00:10:47.816 { 00:10:47.816 "dma_device_id": "system", 00:10:47.816 "dma_device_type": 1 00:10:47.816 }, 00:10:47.816 { 00:10:47.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.816 "dma_device_type": 2 00:10:47.816 } 00:10:47.816 ], 00:10:47.816 "driver_specific": {} 00:10:47.816 } 00:10:47.816 ] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.816 [2024-12-12 09:24:21.789644] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.816 [2024-12-12 09:24:21.789748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.816 [2024-12-12 09:24:21.789792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.816 [2024-12-12 09:24:21.791940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.816 [2024-12-12 09:24:21.792069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.816 09:24:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.816 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.075 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.075 "name": "Existed_Raid", 00:10:48.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.075 "strip_size_kb": 64, 00:10:48.075 "state": "configuring", 00:10:48.075 
"raid_level": "concat", 00:10:48.075 "superblock": false, 00:10:48.075 "num_base_bdevs": 4, 00:10:48.075 "num_base_bdevs_discovered": 3, 00:10:48.075 "num_base_bdevs_operational": 4, 00:10:48.075 "base_bdevs_list": [ 00:10:48.075 { 00:10:48.075 "name": "BaseBdev1", 00:10:48.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.075 "is_configured": false, 00:10:48.075 "data_offset": 0, 00:10:48.075 "data_size": 0 00:10:48.075 }, 00:10:48.075 { 00:10:48.075 "name": "BaseBdev2", 00:10:48.075 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:48.075 "is_configured": true, 00:10:48.075 "data_offset": 0, 00:10:48.075 "data_size": 65536 00:10:48.075 }, 00:10:48.075 { 00:10:48.075 "name": "BaseBdev3", 00:10:48.075 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:48.075 "is_configured": true, 00:10:48.075 "data_offset": 0, 00:10:48.075 "data_size": 65536 00:10:48.075 }, 00:10:48.075 { 00:10:48.075 "name": "BaseBdev4", 00:10:48.075 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:48.075 "is_configured": true, 00:10:48.075 "data_offset": 0, 00:10:48.075 "data_size": 65536 00:10:48.075 } 00:10:48.075 ] 00:10:48.075 }' 00:10:48.075 09:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.075 09:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.335 [2024-12-12 09:24:22.236859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.335 "name": "Existed_Raid", 00:10:48.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.335 "strip_size_kb": 64, 00:10:48.335 "state": "configuring", 00:10:48.335 "raid_level": "concat", 00:10:48.335 "superblock": false, 
00:10:48.335 "num_base_bdevs": 4, 00:10:48.335 "num_base_bdevs_discovered": 2, 00:10:48.335 "num_base_bdevs_operational": 4, 00:10:48.335 "base_bdevs_list": [ 00:10:48.335 { 00:10:48.335 "name": "BaseBdev1", 00:10:48.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.335 "is_configured": false, 00:10:48.335 "data_offset": 0, 00:10:48.335 "data_size": 0 00:10:48.335 }, 00:10:48.335 { 00:10:48.335 "name": null, 00:10:48.335 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:48.335 "is_configured": false, 00:10:48.335 "data_offset": 0, 00:10:48.335 "data_size": 65536 00:10:48.335 }, 00:10:48.335 { 00:10:48.335 "name": "BaseBdev3", 00:10:48.335 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:48.335 "is_configured": true, 00:10:48.335 "data_offset": 0, 00:10:48.335 "data_size": 65536 00:10:48.335 }, 00:10:48.335 { 00:10:48.335 "name": "BaseBdev4", 00:10:48.335 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:48.335 "is_configured": true, 00:10:48.335 "data_offset": 0, 00:10:48.335 "data_size": 65536 00:10:48.335 } 00:10:48.335 ] 00:10:48.335 }' 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.335 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:48.903 09:24:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 [2024-12-12 09:24:22.785054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.903 BaseBdev1 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.903 09:24:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.903 [ 00:10:48.903 { 00:10:48.903 "name": "BaseBdev1", 00:10:48.903 "aliases": [ 00:10:48.903 "2e859137-c239-4db7-9ef5-278e41e97caf" 00:10:48.903 ], 00:10:48.903 "product_name": "Malloc disk", 00:10:48.903 "block_size": 512, 00:10:48.903 "num_blocks": 65536, 00:10:48.903 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:48.903 "assigned_rate_limits": { 00:10:48.903 "rw_ios_per_sec": 0, 00:10:48.903 "rw_mbytes_per_sec": 0, 00:10:48.903 "r_mbytes_per_sec": 0, 00:10:48.903 "w_mbytes_per_sec": 0 00:10:48.903 }, 00:10:48.903 "claimed": true, 00:10:48.903 "claim_type": "exclusive_write", 00:10:48.903 "zoned": false, 00:10:48.903 "supported_io_types": { 00:10:48.903 "read": true, 00:10:48.903 "write": true, 00:10:48.903 "unmap": true, 00:10:48.904 "flush": true, 00:10:48.904 "reset": true, 00:10:48.904 "nvme_admin": false, 00:10:48.904 "nvme_io": false, 00:10:48.904 "nvme_io_md": false, 00:10:48.904 "write_zeroes": true, 00:10:48.904 "zcopy": true, 00:10:48.904 "get_zone_info": false, 00:10:48.904 "zone_management": false, 00:10:48.904 "zone_append": false, 00:10:48.904 "compare": false, 00:10:48.904 "compare_and_write": false, 00:10:48.904 "abort": true, 00:10:48.904 "seek_hole": false, 00:10:48.904 "seek_data": false, 00:10:48.904 "copy": true, 00:10:48.904 "nvme_iov_md": false 00:10:48.904 }, 00:10:48.904 "memory_domains": [ 00:10:48.904 { 00:10:48.904 "dma_device_id": "system", 00:10:48.904 "dma_device_type": 1 00:10:48.904 }, 00:10:48.904 { 00:10:48.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.904 "dma_device_type": 2 00:10:48.904 } 00:10:48.904 ], 00:10:48.904 "driver_specific": {} 00:10:48.904 } 00:10:48.904 ] 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.904 "name": "Existed_Raid", 00:10:48.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.904 "strip_size_kb": 64, 00:10:48.904 "state": "configuring", 00:10:48.904 "raid_level": "concat", 00:10:48.904 "superblock": false, 
00:10:48.904 "num_base_bdevs": 4, 00:10:48.904 "num_base_bdevs_discovered": 3, 00:10:48.904 "num_base_bdevs_operational": 4, 00:10:48.904 "base_bdevs_list": [ 00:10:48.904 { 00:10:48.904 "name": "BaseBdev1", 00:10:48.904 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:48.904 "is_configured": true, 00:10:48.904 "data_offset": 0, 00:10:48.904 "data_size": 65536 00:10:48.904 }, 00:10:48.904 { 00:10:48.904 "name": null, 00:10:48.904 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:48.904 "is_configured": false, 00:10:48.904 "data_offset": 0, 00:10:48.904 "data_size": 65536 00:10:48.904 }, 00:10:48.904 { 00:10:48.904 "name": "BaseBdev3", 00:10:48.904 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:48.904 "is_configured": true, 00:10:48.904 "data_offset": 0, 00:10:48.904 "data_size": 65536 00:10:48.904 }, 00:10:48.904 { 00:10:48.904 "name": "BaseBdev4", 00:10:48.904 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:48.904 "is_configured": true, 00:10:48.904 "data_offset": 0, 00:10:48.904 "data_size": 65536 00:10:48.904 } 00:10:48.904 ] 00:10:48.904 }' 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.904 09:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:49.473 09:24:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 [2024-12-12 09:24:23.324158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.473 09:24:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.473 "name": "Existed_Raid", 00:10:49.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.473 "strip_size_kb": 64, 00:10:49.473 "state": "configuring", 00:10:49.473 "raid_level": "concat", 00:10:49.473 "superblock": false, 00:10:49.473 "num_base_bdevs": 4, 00:10:49.473 "num_base_bdevs_discovered": 2, 00:10:49.473 "num_base_bdevs_operational": 4, 00:10:49.473 "base_bdevs_list": [ 00:10:49.473 { 00:10:49.473 "name": "BaseBdev1", 00:10:49.473 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:49.473 "is_configured": true, 00:10:49.473 "data_offset": 0, 00:10:49.473 "data_size": 65536 00:10:49.473 }, 00:10:49.473 { 00:10:49.473 "name": null, 00:10:49.473 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:49.473 "is_configured": false, 00:10:49.473 "data_offset": 0, 00:10:49.473 "data_size": 65536 00:10:49.473 }, 00:10:49.473 { 00:10:49.473 "name": null, 00:10:49.473 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:49.473 "is_configured": false, 00:10:49.473 "data_offset": 0, 00:10:49.473 "data_size": 65536 00:10:49.473 }, 00:10:49.473 { 00:10:49.473 "name": "BaseBdev4", 00:10:49.473 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:49.473 "is_configured": true, 00:10:49.473 "data_offset": 0, 00:10:49.473 "data_size": 65536 00:10:49.473 } 00:10:49.473 ] 00:10:49.473 }' 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.473 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.732 09:24:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.732 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.732 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.732 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.990 [2024-12-12 09:24:23.795370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.990 "name": "Existed_Raid", 00:10:49.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.990 "strip_size_kb": 64, 00:10:49.990 "state": "configuring", 00:10:49.990 "raid_level": "concat", 00:10:49.990 "superblock": false, 00:10:49.990 "num_base_bdevs": 4, 00:10:49.990 "num_base_bdevs_discovered": 3, 00:10:49.990 "num_base_bdevs_operational": 4, 00:10:49.990 "base_bdevs_list": [ 00:10:49.990 { 00:10:49.990 "name": "BaseBdev1", 00:10:49.990 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:49.990 "is_configured": true, 00:10:49.990 "data_offset": 0, 00:10:49.990 "data_size": 65536 00:10:49.990 }, 00:10:49.990 { 00:10:49.990 "name": null, 00:10:49.990 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:49.990 "is_configured": false, 00:10:49.990 "data_offset": 0, 00:10:49.990 "data_size": 65536 00:10:49.990 }, 00:10:49.990 { 00:10:49.990 "name": "BaseBdev3", 00:10:49.990 "uuid": 
"e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:49.990 "is_configured": true, 00:10:49.990 "data_offset": 0, 00:10:49.990 "data_size": 65536 00:10:49.990 }, 00:10:49.990 { 00:10:49.990 "name": "BaseBdev4", 00:10:49.990 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:49.990 "is_configured": true, 00:10:49.990 "data_offset": 0, 00:10:49.990 "data_size": 65536 00:10:49.990 } 00:10:49.990 ] 00:10:49.990 }' 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.990 09:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.249 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.249 [2024-12-12 09:24:24.246639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.508 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.508 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:50.508 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.508 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.508 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.508 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.509 "name": "Existed_Raid", 00:10:50.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.509 "strip_size_kb": 64, 00:10:50.509 "state": "configuring", 00:10:50.509 "raid_level": "concat", 00:10:50.509 "superblock": false, 00:10:50.509 "num_base_bdevs": 4, 00:10:50.509 
"num_base_bdevs_discovered": 2, 00:10:50.509 "num_base_bdevs_operational": 4, 00:10:50.509 "base_bdevs_list": [ 00:10:50.509 { 00:10:50.509 "name": null, 00:10:50.509 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:50.509 "is_configured": false, 00:10:50.509 "data_offset": 0, 00:10:50.509 "data_size": 65536 00:10:50.509 }, 00:10:50.509 { 00:10:50.509 "name": null, 00:10:50.509 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:50.509 "is_configured": false, 00:10:50.509 "data_offset": 0, 00:10:50.509 "data_size": 65536 00:10:50.509 }, 00:10:50.509 { 00:10:50.509 "name": "BaseBdev3", 00:10:50.509 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:50.509 "is_configured": true, 00:10:50.509 "data_offset": 0, 00:10:50.509 "data_size": 65536 00:10:50.509 }, 00:10:50.509 { 00:10:50.509 "name": "BaseBdev4", 00:10:50.509 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:50.509 "is_configured": true, 00:10:50.509 "data_offset": 0, 00:10:50.509 "data_size": 65536 00:10:50.509 } 00:10:50.509 ] 00:10:50.509 }' 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.509 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.768 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.768 [2024-12-12 09:24:24.786807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.027 "name": "Existed_Raid", 00:10:51.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.027 "strip_size_kb": 64, 00:10:51.027 "state": "configuring", 00:10:51.027 "raid_level": "concat", 00:10:51.027 "superblock": false, 00:10:51.027 "num_base_bdevs": 4, 00:10:51.027 "num_base_bdevs_discovered": 3, 00:10:51.027 "num_base_bdevs_operational": 4, 00:10:51.027 "base_bdevs_list": [ 00:10:51.027 { 00:10:51.027 "name": null, 00:10:51.027 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:51.027 "is_configured": false, 00:10:51.027 "data_offset": 0, 00:10:51.027 "data_size": 65536 00:10:51.027 }, 00:10:51.027 { 00:10:51.027 "name": "BaseBdev2", 00:10:51.027 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:51.027 "is_configured": true, 00:10:51.027 "data_offset": 0, 00:10:51.027 "data_size": 65536 00:10:51.027 }, 00:10:51.027 { 00:10:51.027 "name": "BaseBdev3", 00:10:51.027 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:51.027 "is_configured": true, 00:10:51.027 "data_offset": 0, 00:10:51.027 "data_size": 65536 00:10:51.027 }, 00:10:51.027 { 00:10:51.027 "name": "BaseBdev4", 00:10:51.027 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:51.027 "is_configured": true, 00:10:51.027 "data_offset": 0, 00:10:51.027 "data_size": 65536 00:10:51.027 } 00:10:51.027 ] 00:10:51.027 }' 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.027 09:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2e859137-c239-4db7-9ef5-278e41e97caf 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.285 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.543 [2024-12-12 09:24:25.351642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:51.543 [2024-12-12 09:24:25.351754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:51.543 [2024-12-12 09:24:25.351778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:51.543 [2024-12-12 09:24:25.352171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:51.543 [2024-12-12 09:24:25.352390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.543 [2024-12-12 09:24:25.352434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:51.543 [2024-12-12 09:24:25.352751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.543 NewBaseBdev 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.543 09:24:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.543 [ 00:10:51.543 { 00:10:51.543 "name": "NewBaseBdev", 00:10:51.543 "aliases": [ 00:10:51.543 "2e859137-c239-4db7-9ef5-278e41e97caf" 00:10:51.543 ], 00:10:51.543 "product_name": "Malloc disk", 00:10:51.543 "block_size": 512, 00:10:51.543 "num_blocks": 65536, 00:10:51.543 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:51.543 "assigned_rate_limits": { 00:10:51.543 "rw_ios_per_sec": 0, 00:10:51.543 "rw_mbytes_per_sec": 0, 00:10:51.543 "r_mbytes_per_sec": 0, 00:10:51.543 "w_mbytes_per_sec": 0 00:10:51.543 }, 00:10:51.543 "claimed": true, 00:10:51.544 "claim_type": "exclusive_write", 00:10:51.544 "zoned": false, 00:10:51.544 "supported_io_types": { 00:10:51.544 "read": true, 00:10:51.544 "write": true, 00:10:51.544 "unmap": true, 00:10:51.544 "flush": true, 00:10:51.544 "reset": true, 00:10:51.544 "nvme_admin": false, 00:10:51.544 "nvme_io": false, 00:10:51.544 "nvme_io_md": false, 00:10:51.544 "write_zeroes": true, 00:10:51.544 "zcopy": true, 00:10:51.544 "get_zone_info": false, 00:10:51.544 "zone_management": false, 00:10:51.544 "zone_append": false, 00:10:51.544 "compare": false, 00:10:51.544 "compare_and_write": false, 00:10:51.544 "abort": true, 00:10:51.544 "seek_hole": false, 00:10:51.544 "seek_data": false, 00:10:51.544 "copy": true, 00:10:51.544 "nvme_iov_md": false 00:10:51.544 }, 00:10:51.544 "memory_domains": [ 00:10:51.544 { 00:10:51.544 "dma_device_id": "system", 00:10:51.544 "dma_device_type": 1 00:10:51.544 }, 00:10:51.544 { 00:10:51.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.544 "dma_device_type": 2 00:10:51.544 } 00:10:51.544 ], 00:10:51.544 "driver_specific": {} 00:10:51.544 } 00:10:51.544 ] 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.544 "name": "Existed_Raid", 00:10:51.544 "uuid": "f4a7ea23-9db7-4b9a-912e-06f6fcce4f57", 00:10:51.544 "strip_size_kb": 64, 00:10:51.544 "state": "online", 00:10:51.544 "raid_level": "concat", 00:10:51.544 "superblock": false, 00:10:51.544 
"num_base_bdevs": 4, 00:10:51.544 "num_base_bdevs_discovered": 4, 00:10:51.544 "num_base_bdevs_operational": 4, 00:10:51.544 "base_bdevs_list": [ 00:10:51.544 { 00:10:51.544 "name": "NewBaseBdev", 00:10:51.544 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:51.544 "is_configured": true, 00:10:51.544 "data_offset": 0, 00:10:51.544 "data_size": 65536 00:10:51.544 }, 00:10:51.544 { 00:10:51.544 "name": "BaseBdev2", 00:10:51.544 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:51.544 "is_configured": true, 00:10:51.544 "data_offset": 0, 00:10:51.544 "data_size": 65536 00:10:51.544 }, 00:10:51.544 { 00:10:51.544 "name": "BaseBdev3", 00:10:51.544 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:51.544 "is_configured": true, 00:10:51.544 "data_offset": 0, 00:10:51.544 "data_size": 65536 00:10:51.544 }, 00:10:51.544 { 00:10:51.544 "name": "BaseBdev4", 00:10:51.544 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:51.544 "is_configured": true, 00:10:51.544 "data_offset": 0, 00:10:51.544 "data_size": 65536 00:10:51.544 } 00:10:51.544 ] 00:10:51.544 }' 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.544 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.803 09:24:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.803 [2024-12-12 09:24:25.791400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.803 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.803 "name": "Existed_Raid", 00:10:51.803 "aliases": [ 00:10:51.803 "f4a7ea23-9db7-4b9a-912e-06f6fcce4f57" 00:10:51.803 ], 00:10:51.803 "product_name": "Raid Volume", 00:10:51.804 "block_size": 512, 00:10:51.804 "num_blocks": 262144, 00:10:51.804 "uuid": "f4a7ea23-9db7-4b9a-912e-06f6fcce4f57", 00:10:51.804 "assigned_rate_limits": { 00:10:51.804 "rw_ios_per_sec": 0, 00:10:51.804 "rw_mbytes_per_sec": 0, 00:10:51.804 "r_mbytes_per_sec": 0, 00:10:51.804 "w_mbytes_per_sec": 0 00:10:51.804 }, 00:10:51.804 "claimed": false, 00:10:51.804 "zoned": false, 00:10:51.804 "supported_io_types": { 00:10:51.804 "read": true, 00:10:51.804 "write": true, 00:10:51.804 "unmap": true, 00:10:51.804 "flush": true, 00:10:51.804 "reset": true, 00:10:51.804 "nvme_admin": false, 00:10:51.804 "nvme_io": false, 00:10:51.804 "nvme_io_md": false, 00:10:51.804 "write_zeroes": true, 00:10:51.804 "zcopy": false, 00:10:51.804 "get_zone_info": false, 00:10:51.804 "zone_management": false, 00:10:51.804 "zone_append": false, 00:10:51.804 "compare": false, 00:10:51.804 "compare_and_write": false, 00:10:51.804 "abort": false, 00:10:51.804 "seek_hole": false, 00:10:51.804 "seek_data": false, 00:10:51.804 "copy": false, 00:10:51.804 "nvme_iov_md": false 00:10:51.804 }, 
00:10:51.804 "memory_domains": [ 00:10:51.804 { 00:10:51.804 "dma_device_id": "system", 00:10:51.804 "dma_device_type": 1 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.804 "dma_device_type": 2 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "system", 00:10:51.804 "dma_device_type": 1 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.804 "dma_device_type": 2 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "system", 00:10:51.804 "dma_device_type": 1 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.804 "dma_device_type": 2 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "system", 00:10:51.804 "dma_device_type": 1 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.804 "dma_device_type": 2 00:10:51.804 } 00:10:51.804 ], 00:10:51.804 "driver_specific": { 00:10:51.804 "raid": { 00:10:51.804 "uuid": "f4a7ea23-9db7-4b9a-912e-06f6fcce4f57", 00:10:51.804 "strip_size_kb": 64, 00:10:51.804 "state": "online", 00:10:51.804 "raid_level": "concat", 00:10:51.804 "superblock": false, 00:10:51.804 "num_base_bdevs": 4, 00:10:51.804 "num_base_bdevs_discovered": 4, 00:10:51.804 "num_base_bdevs_operational": 4, 00:10:51.804 "base_bdevs_list": [ 00:10:51.804 { 00:10:51.804 "name": "NewBaseBdev", 00:10:51.804 "uuid": "2e859137-c239-4db7-9ef5-278e41e97caf", 00:10:51.804 "is_configured": true, 00:10:51.804 "data_offset": 0, 00:10:51.804 "data_size": 65536 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "name": "BaseBdev2", 00:10:51.804 "uuid": "91cd9173-30ef-4bda-9756-b387ffd0c281", 00:10:51.804 "is_configured": true, 00:10:51.804 "data_offset": 0, 00:10:51.804 "data_size": 65536 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "name": "BaseBdev3", 00:10:51.804 "uuid": "e7cb9587-c12f-4ebd-82c9-cd012a6654cd", 00:10:51.804 "is_configured": true, 00:10:51.804 "data_offset": 0, 
00:10:51.804 "data_size": 65536 00:10:51.804 }, 00:10:51.804 { 00:10:51.804 "name": "BaseBdev4", 00:10:51.804 "uuid": "b27841de-6f13-4ad9-a94d-86084310a562", 00:10:51.804 "is_configured": true, 00:10:51.804 "data_offset": 0, 00:10:51.804 "data_size": 65536 00:10:51.804 } 00:10:51.804 ] 00:10:51.804 } 00:10:51.804 } 00:10:51.804 }' 00:10:51.804 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:52.063 BaseBdev2 00:10:52.063 BaseBdev3 00:10:52.063 BaseBdev4' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.063 09:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.063 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.323 [2024-12-12 09:24:26.086410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.323 [2024-12-12 09:24:26.086484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.323 [2024-12-12 09:24:26.086598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.323 [2024-12-12 09:24:26.086681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.323 [2024-12-12 09:24:26.086692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72419 00:10:52.323 09:24:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72419 ']' 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72419 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72419 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.323 killing process with pid 72419 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72419' 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72419 00:10:52.323 [2024-12-12 09:24:26.130175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.323 09:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72419 00:10:52.582 [2024-12-12 09:24:26.537247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.960 00:10:53.960 real 0m11.526s 00:10:53.960 user 0m18.041s 00:10:53.960 sys 0m2.199s 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.960 ************************************ 00:10:53.960 END TEST raid_state_function_test 00:10:53.960 ************************************ 00:10:53.960 09:24:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:53.960 09:24:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.960 09:24:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.960 09:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.960 ************************************ 00:10:53.960 START TEST raid_state_function_test_sb 00:10:53.960 ************************************ 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:53.960 Process raid pid: 73087 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=73087 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73087' 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73087 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73087 ']' 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.960 09:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.960 [2024-12-12 09:24:27.893757] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:10:53.960 [2024-12-12 09:24:27.893938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.219 [2024-12-12 09:24:28.066837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.219 [2024-12-12 09:24:28.204831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.477 [2024-12-12 09:24:28.439599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.477 [2024-12-12 09:24:28.439750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.739 [2024-12-12 09:24:28.723444] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.739 [2024-12-12 09:24:28.723503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.739 [2024-12-12 09:24:28.723514] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.739 [2024-12-12 09:24:28.723524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.739 [2024-12-12 09:24:28.723536] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:54.739 [2024-12-12 09:24:28.723545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.739 [2024-12-12 09:24:28.723551] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.739 [2024-12-12 09:24:28.723566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.739 
09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.739 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.001 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.001 "name": "Existed_Raid", 00:10:55.001 "uuid": "d454eca0-17fd-4785-83e6-ed88982c733a", 00:10:55.001 "strip_size_kb": 64, 00:10:55.001 "state": "configuring", 00:10:55.001 "raid_level": "concat", 00:10:55.001 "superblock": true, 00:10:55.001 "num_base_bdevs": 4, 00:10:55.001 "num_base_bdevs_discovered": 0, 00:10:55.001 "num_base_bdevs_operational": 4, 00:10:55.001 "base_bdevs_list": [ 00:10:55.001 { 00:10:55.001 "name": "BaseBdev1", 00:10:55.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.001 "is_configured": false, 00:10:55.001 "data_offset": 0, 00:10:55.001 "data_size": 0 00:10:55.001 }, 00:10:55.001 { 00:10:55.001 "name": "BaseBdev2", 00:10:55.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.001 "is_configured": false, 00:10:55.001 "data_offset": 0, 00:10:55.001 "data_size": 0 00:10:55.001 }, 00:10:55.001 { 00:10:55.001 "name": "BaseBdev3", 00:10:55.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.001 "is_configured": false, 00:10:55.001 "data_offset": 0, 00:10:55.001 "data_size": 0 00:10:55.001 }, 00:10:55.001 { 00:10:55.001 "name": "BaseBdev4", 00:10:55.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.001 "is_configured": false, 00:10:55.001 "data_offset": 0, 00:10:55.001 "data_size": 0 00:10:55.001 } 00:10:55.001 ] 00:10:55.001 }' 00:10:55.001 09:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.001 09:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.261 09:24:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.261 [2024-12-12 09:24:29.166665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.261 [2024-12-12 09:24:29.166767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.261 [2024-12-12 09:24:29.178655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.261 [2024-12-12 09:24:29.178760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.261 [2024-12-12 09:24:29.178789] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.261 [2024-12-12 09:24:29.178812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.261 [2024-12-12 09:24:29.178830] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.261 [2024-12-12 09:24:29.178850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.261 [2024-12-12 09:24:29.178868] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:55.261 [2024-12-12 09:24:29.178889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.261 [2024-12-12 09:24:29.234423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.261 BaseBdev1 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.261 [ 00:10:55.261 { 00:10:55.261 "name": "BaseBdev1", 00:10:55.261 "aliases": [ 00:10:55.261 "e4cf5efe-a5c1-49c9-aba0-d6894f24577e" 00:10:55.261 ], 00:10:55.261 "product_name": "Malloc disk", 00:10:55.261 "block_size": 512, 00:10:55.261 "num_blocks": 65536, 00:10:55.261 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:55.261 "assigned_rate_limits": { 00:10:55.261 "rw_ios_per_sec": 0, 00:10:55.261 "rw_mbytes_per_sec": 0, 00:10:55.261 "r_mbytes_per_sec": 0, 00:10:55.261 "w_mbytes_per_sec": 0 00:10:55.261 }, 00:10:55.261 "claimed": true, 00:10:55.261 "claim_type": "exclusive_write", 00:10:55.261 "zoned": false, 00:10:55.261 "supported_io_types": { 00:10:55.261 "read": true, 00:10:55.261 "write": true, 00:10:55.261 "unmap": true, 00:10:55.261 "flush": true, 00:10:55.261 "reset": true, 00:10:55.261 "nvme_admin": false, 00:10:55.261 "nvme_io": false, 00:10:55.261 "nvme_io_md": false, 00:10:55.261 "write_zeroes": true, 00:10:55.261 "zcopy": true, 00:10:55.261 "get_zone_info": false, 00:10:55.261 "zone_management": false, 00:10:55.261 "zone_append": false, 00:10:55.261 "compare": false, 00:10:55.261 "compare_and_write": false, 00:10:55.261 "abort": true, 00:10:55.261 "seek_hole": false, 00:10:55.261 "seek_data": false, 00:10:55.261 "copy": true, 00:10:55.261 "nvme_iov_md": false 00:10:55.261 }, 00:10:55.261 "memory_domains": [ 00:10:55.261 { 00:10:55.261 "dma_device_id": "system", 00:10:55.261 "dma_device_type": 1 00:10:55.261 }, 00:10:55.261 { 00:10:55.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.261 "dma_device_type": 2 00:10:55.261 } 
00:10:55.261 ], 00:10:55.261 "driver_specific": {} 00:10:55.261 } 00:10:55.261 ] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.261 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.521 09:24:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.521 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.521 "name": "Existed_Raid", 00:10:55.521 "uuid": "648deb72-e0dd-4151-b601-607f28717782", 00:10:55.521 "strip_size_kb": 64, 00:10:55.521 "state": "configuring", 00:10:55.521 "raid_level": "concat", 00:10:55.521 "superblock": true, 00:10:55.521 "num_base_bdevs": 4, 00:10:55.521 "num_base_bdevs_discovered": 1, 00:10:55.521 "num_base_bdevs_operational": 4, 00:10:55.521 "base_bdevs_list": [ 00:10:55.521 { 00:10:55.521 "name": "BaseBdev1", 00:10:55.521 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:55.521 "is_configured": true, 00:10:55.521 "data_offset": 2048, 00:10:55.521 "data_size": 63488 00:10:55.521 }, 00:10:55.521 { 00:10:55.521 "name": "BaseBdev2", 00:10:55.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.521 "is_configured": false, 00:10:55.521 "data_offset": 0, 00:10:55.521 "data_size": 0 00:10:55.521 }, 00:10:55.521 { 00:10:55.521 "name": "BaseBdev3", 00:10:55.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.521 "is_configured": false, 00:10:55.521 "data_offset": 0, 00:10:55.521 "data_size": 0 00:10:55.521 }, 00:10:55.521 { 00:10:55.521 "name": "BaseBdev4", 00:10:55.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.521 "is_configured": false, 00:10:55.521 "data_offset": 0, 00:10:55.521 "data_size": 0 00:10:55.521 } 00:10:55.521 ] 00:10:55.521 }' 00:10:55.521 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.521 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.780 09:24:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.780 [2024-12-12 09:24:29.709641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.780 [2024-12-12 09:24:29.709780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.780 [2024-12-12 09:24:29.721688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.780 [2024-12-12 09:24:29.723850] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.780 [2024-12-12 09:24:29.723964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.780 [2024-12-12 09:24:29.723998] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.780 [2024-12-12 09:24:29.724025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.780 [2024-12-12 09:24:29.724044] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.780 [2024-12-12 09:24:29.724069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.780 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:55.781 "name": "Existed_Raid", 00:10:55.781 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:55.781 "strip_size_kb": 64, 00:10:55.781 "state": "configuring", 00:10:55.781 "raid_level": "concat", 00:10:55.781 "superblock": true, 00:10:55.781 "num_base_bdevs": 4, 00:10:55.781 "num_base_bdevs_discovered": 1, 00:10:55.781 "num_base_bdevs_operational": 4, 00:10:55.781 "base_bdevs_list": [ 00:10:55.781 { 00:10:55.781 "name": "BaseBdev1", 00:10:55.781 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:55.781 "is_configured": true, 00:10:55.781 "data_offset": 2048, 00:10:55.781 "data_size": 63488 00:10:55.781 }, 00:10:55.781 { 00:10:55.781 "name": "BaseBdev2", 00:10:55.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.781 "is_configured": false, 00:10:55.781 "data_offset": 0, 00:10:55.781 "data_size": 0 00:10:55.781 }, 00:10:55.781 { 00:10:55.781 "name": "BaseBdev3", 00:10:55.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.781 "is_configured": false, 00:10:55.781 "data_offset": 0, 00:10:55.781 "data_size": 0 00:10:55.781 }, 00:10:55.781 { 00:10:55.781 "name": "BaseBdev4", 00:10:55.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.781 "is_configured": false, 00:10:55.781 "data_offset": 0, 00:10:55.781 "data_size": 0 00:10:55.781 } 00:10:55.781 ] 00:10:55.781 }' 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.781 09:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.349 [2024-12-12 09:24:30.245669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:56.349 BaseBdev2 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.349 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.350 [ 00:10:56.350 { 00:10:56.350 "name": "BaseBdev2", 00:10:56.350 "aliases": [ 00:10:56.350 "301f239d-003d-4bef-9a86-7f5094b001e7" 00:10:56.350 ], 00:10:56.350 "product_name": "Malloc disk", 00:10:56.350 "block_size": 512, 00:10:56.350 "num_blocks": 65536, 00:10:56.350 "uuid": "301f239d-003d-4bef-9a86-7f5094b001e7", 
00:10:56.350 "assigned_rate_limits": { 00:10:56.350 "rw_ios_per_sec": 0, 00:10:56.350 "rw_mbytes_per_sec": 0, 00:10:56.350 "r_mbytes_per_sec": 0, 00:10:56.350 "w_mbytes_per_sec": 0 00:10:56.350 }, 00:10:56.350 "claimed": true, 00:10:56.350 "claim_type": "exclusive_write", 00:10:56.350 "zoned": false, 00:10:56.350 "supported_io_types": { 00:10:56.350 "read": true, 00:10:56.350 "write": true, 00:10:56.350 "unmap": true, 00:10:56.350 "flush": true, 00:10:56.350 "reset": true, 00:10:56.350 "nvme_admin": false, 00:10:56.350 "nvme_io": false, 00:10:56.350 "nvme_io_md": false, 00:10:56.350 "write_zeroes": true, 00:10:56.350 "zcopy": true, 00:10:56.350 "get_zone_info": false, 00:10:56.350 "zone_management": false, 00:10:56.350 "zone_append": false, 00:10:56.350 "compare": false, 00:10:56.350 "compare_and_write": false, 00:10:56.350 "abort": true, 00:10:56.350 "seek_hole": false, 00:10:56.350 "seek_data": false, 00:10:56.350 "copy": true, 00:10:56.350 "nvme_iov_md": false 00:10:56.350 }, 00:10:56.350 "memory_domains": [ 00:10:56.350 { 00:10:56.350 "dma_device_id": "system", 00:10:56.350 "dma_device_type": 1 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.350 "dma_device_type": 2 00:10:56.350 } 00:10:56.350 ], 00:10:56.350 "driver_specific": {} 00:10:56.350 } 00:10:56.350 ] 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.350 "name": "Existed_Raid", 00:10:56.350 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:56.350 "strip_size_kb": 64, 00:10:56.350 "state": "configuring", 00:10:56.350 "raid_level": "concat", 00:10:56.350 "superblock": true, 00:10:56.350 "num_base_bdevs": 4, 00:10:56.350 "num_base_bdevs_discovered": 2, 00:10:56.350 
"num_base_bdevs_operational": 4, 00:10:56.350 "base_bdevs_list": [ 00:10:56.350 { 00:10:56.350 "name": "BaseBdev1", 00:10:56.350 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:56.350 "is_configured": true, 00:10:56.350 "data_offset": 2048, 00:10:56.350 "data_size": 63488 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "name": "BaseBdev2", 00:10:56.350 "uuid": "301f239d-003d-4bef-9a86-7f5094b001e7", 00:10:56.350 "is_configured": true, 00:10:56.350 "data_offset": 2048, 00:10:56.350 "data_size": 63488 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "name": "BaseBdev3", 00:10:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.350 "is_configured": false, 00:10:56.350 "data_offset": 0, 00:10:56.350 "data_size": 0 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "name": "BaseBdev4", 00:10:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.350 "is_configured": false, 00:10:56.350 "data_offset": 0, 00:10:56.350 "data_size": 0 00:10:56.350 } 00:10:56.350 ] 00:10:56.350 }' 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.350 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.919 [2024-12-12 09:24:30.808156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.919 BaseBdev3 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.919 [ 00:10:56.919 { 00:10:56.919 "name": "BaseBdev3", 00:10:56.919 "aliases": [ 00:10:56.919 "4de41909-234b-4a00-8db0-770aa8760f52" 00:10:56.919 ], 00:10:56.919 "product_name": "Malloc disk", 00:10:56.919 "block_size": 512, 00:10:56.919 "num_blocks": 65536, 00:10:56.919 "uuid": "4de41909-234b-4a00-8db0-770aa8760f52", 00:10:56.919 "assigned_rate_limits": { 00:10:56.919 "rw_ios_per_sec": 0, 00:10:56.919 "rw_mbytes_per_sec": 0, 00:10:56.919 "r_mbytes_per_sec": 0, 00:10:56.919 "w_mbytes_per_sec": 0 00:10:56.919 }, 00:10:56.919 "claimed": true, 00:10:56.919 "claim_type": "exclusive_write", 00:10:56.919 "zoned": false, 00:10:56.919 "supported_io_types": { 
00:10:56.919 "read": true, 00:10:56.919 "write": true, 00:10:56.919 "unmap": true, 00:10:56.919 "flush": true, 00:10:56.919 "reset": true, 00:10:56.919 "nvme_admin": false, 00:10:56.919 "nvme_io": false, 00:10:56.919 "nvme_io_md": false, 00:10:56.919 "write_zeroes": true, 00:10:56.919 "zcopy": true, 00:10:56.919 "get_zone_info": false, 00:10:56.919 "zone_management": false, 00:10:56.919 "zone_append": false, 00:10:56.919 "compare": false, 00:10:56.919 "compare_and_write": false, 00:10:56.919 "abort": true, 00:10:56.919 "seek_hole": false, 00:10:56.919 "seek_data": false, 00:10:56.919 "copy": true, 00:10:56.919 "nvme_iov_md": false 00:10:56.919 }, 00:10:56.919 "memory_domains": [ 00:10:56.919 { 00:10:56.919 "dma_device_id": "system", 00:10:56.919 "dma_device_type": 1 00:10:56.919 }, 00:10:56.919 { 00:10:56.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.919 "dma_device_type": 2 00:10:56.919 } 00:10:56.919 ], 00:10:56.919 "driver_specific": {} 00:10:56.919 } 00:10:56.919 ] 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.919 "name": "Existed_Raid", 00:10:56.919 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:56.919 "strip_size_kb": 64, 00:10:56.919 "state": "configuring", 00:10:56.919 "raid_level": "concat", 00:10:56.919 "superblock": true, 00:10:56.919 "num_base_bdevs": 4, 00:10:56.919 "num_base_bdevs_discovered": 3, 00:10:56.919 "num_base_bdevs_operational": 4, 00:10:56.919 "base_bdevs_list": [ 00:10:56.919 { 00:10:56.919 "name": "BaseBdev1", 00:10:56.919 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:56.919 "is_configured": true, 00:10:56.919 "data_offset": 2048, 00:10:56.919 "data_size": 63488 00:10:56.919 }, 00:10:56.919 { 00:10:56.919 "name": "BaseBdev2", 00:10:56.919 
"uuid": "301f239d-003d-4bef-9a86-7f5094b001e7", 00:10:56.919 "is_configured": true, 00:10:56.919 "data_offset": 2048, 00:10:56.919 "data_size": 63488 00:10:56.919 }, 00:10:56.919 { 00:10:56.919 "name": "BaseBdev3", 00:10:56.919 "uuid": "4de41909-234b-4a00-8db0-770aa8760f52", 00:10:56.919 "is_configured": true, 00:10:56.919 "data_offset": 2048, 00:10:56.919 "data_size": 63488 00:10:56.919 }, 00:10:56.919 { 00:10:56.919 "name": "BaseBdev4", 00:10:56.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.919 "is_configured": false, 00:10:56.919 "data_offset": 0, 00:10:56.919 "data_size": 0 00:10:56.919 } 00:10:56.919 ] 00:10:56.919 }' 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.919 09:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.486 [2024-12-12 09:24:31.337757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.486 [2024-12-12 09:24:31.338116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:57.486 [2024-12-12 09:24:31.338135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:57.486 BaseBdev4 00:10:57.486 [2024-12-12 09:24:31.338452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.486 [2024-12-12 09:24:31.338628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:57.486 [2024-12-12 09:24:31.338641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:57.486 [2024-12-12 09:24:31.338803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.486 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.487 [ 00:10:57.487 { 00:10:57.487 "name": "BaseBdev4", 00:10:57.487 "aliases": [ 00:10:57.487 "d21904c8-02ea-4da0-b3a9-1f99d4269b59" 00:10:57.487 ], 00:10:57.487 "product_name": "Malloc disk", 00:10:57.487 "block_size": 512, 00:10:57.487 
"num_blocks": 65536, 00:10:57.487 "uuid": "d21904c8-02ea-4da0-b3a9-1f99d4269b59", 00:10:57.487 "assigned_rate_limits": { 00:10:57.487 "rw_ios_per_sec": 0, 00:10:57.487 "rw_mbytes_per_sec": 0, 00:10:57.487 "r_mbytes_per_sec": 0, 00:10:57.487 "w_mbytes_per_sec": 0 00:10:57.487 }, 00:10:57.487 "claimed": true, 00:10:57.487 "claim_type": "exclusive_write", 00:10:57.487 "zoned": false, 00:10:57.487 "supported_io_types": { 00:10:57.487 "read": true, 00:10:57.487 "write": true, 00:10:57.487 "unmap": true, 00:10:57.487 "flush": true, 00:10:57.487 "reset": true, 00:10:57.487 "nvme_admin": false, 00:10:57.487 "nvme_io": false, 00:10:57.487 "nvme_io_md": false, 00:10:57.487 "write_zeroes": true, 00:10:57.487 "zcopy": true, 00:10:57.487 "get_zone_info": false, 00:10:57.487 "zone_management": false, 00:10:57.487 "zone_append": false, 00:10:57.487 "compare": false, 00:10:57.487 "compare_and_write": false, 00:10:57.487 "abort": true, 00:10:57.487 "seek_hole": false, 00:10:57.487 "seek_data": false, 00:10:57.487 "copy": true, 00:10:57.487 "nvme_iov_md": false 00:10:57.487 }, 00:10:57.487 "memory_domains": [ 00:10:57.487 { 00:10:57.487 "dma_device_id": "system", 00:10:57.487 "dma_device_type": 1 00:10:57.487 }, 00:10:57.487 { 00:10:57.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.487 "dma_device_type": 2 00:10:57.487 } 00:10:57.487 ], 00:10:57.487 "driver_specific": {} 00:10:57.487 } 00:10:57.487 ] 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.487 "name": "Existed_Raid", 00:10:57.487 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:57.487 "strip_size_kb": 64, 00:10:57.487 "state": "online", 00:10:57.487 "raid_level": "concat", 00:10:57.487 "superblock": true, 00:10:57.487 "num_base_bdevs": 4, 
00:10:57.487 "num_base_bdevs_discovered": 4, 00:10:57.487 "num_base_bdevs_operational": 4, 00:10:57.487 "base_bdevs_list": [ 00:10:57.487 { 00:10:57.487 "name": "BaseBdev1", 00:10:57.487 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:57.487 "is_configured": true, 00:10:57.487 "data_offset": 2048, 00:10:57.487 "data_size": 63488 00:10:57.487 }, 00:10:57.487 { 00:10:57.487 "name": "BaseBdev2", 00:10:57.487 "uuid": "301f239d-003d-4bef-9a86-7f5094b001e7", 00:10:57.487 "is_configured": true, 00:10:57.487 "data_offset": 2048, 00:10:57.487 "data_size": 63488 00:10:57.487 }, 00:10:57.487 { 00:10:57.487 "name": "BaseBdev3", 00:10:57.487 "uuid": "4de41909-234b-4a00-8db0-770aa8760f52", 00:10:57.487 "is_configured": true, 00:10:57.487 "data_offset": 2048, 00:10:57.487 "data_size": 63488 00:10:57.487 }, 00:10:57.487 { 00:10:57.487 "name": "BaseBdev4", 00:10:57.487 "uuid": "d21904c8-02ea-4da0-b3a9-1f99d4269b59", 00:10:57.487 "is_configured": true, 00:10:57.487 "data_offset": 2048, 00:10:57.487 "data_size": 63488 00:10:57.487 } 00:10:57.487 ] 00:10:57.487 }' 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.487 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.055 
09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.056 [2024-12-12 09:24:31.845261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.056 "name": "Existed_Raid", 00:10:58.056 "aliases": [ 00:10:58.056 "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f" 00:10:58.056 ], 00:10:58.056 "product_name": "Raid Volume", 00:10:58.056 "block_size": 512, 00:10:58.056 "num_blocks": 253952, 00:10:58.056 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:58.056 "assigned_rate_limits": { 00:10:58.056 "rw_ios_per_sec": 0, 00:10:58.056 "rw_mbytes_per_sec": 0, 00:10:58.056 "r_mbytes_per_sec": 0, 00:10:58.056 "w_mbytes_per_sec": 0 00:10:58.056 }, 00:10:58.056 "claimed": false, 00:10:58.056 "zoned": false, 00:10:58.056 "supported_io_types": { 00:10:58.056 "read": true, 00:10:58.056 "write": true, 00:10:58.056 "unmap": true, 00:10:58.056 "flush": true, 00:10:58.056 "reset": true, 00:10:58.056 "nvme_admin": false, 00:10:58.056 "nvme_io": false, 00:10:58.056 "nvme_io_md": false, 00:10:58.056 "write_zeroes": true, 00:10:58.056 "zcopy": false, 00:10:58.056 "get_zone_info": false, 00:10:58.056 "zone_management": false, 00:10:58.056 "zone_append": false, 00:10:58.056 "compare": false, 00:10:58.056 "compare_and_write": false, 00:10:58.056 "abort": false, 00:10:58.056 "seek_hole": false, 00:10:58.056 "seek_data": false, 00:10:58.056 "copy": false, 00:10:58.056 
"nvme_iov_md": false 00:10:58.056 }, 00:10:58.056 "memory_domains": [ 00:10:58.056 { 00:10:58.056 "dma_device_id": "system", 00:10:58.056 "dma_device_type": 1 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.056 "dma_device_type": 2 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "system", 00:10:58.056 "dma_device_type": 1 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.056 "dma_device_type": 2 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "system", 00:10:58.056 "dma_device_type": 1 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.056 "dma_device_type": 2 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "system", 00:10:58.056 "dma_device_type": 1 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.056 "dma_device_type": 2 00:10:58.056 } 00:10:58.056 ], 00:10:58.056 "driver_specific": { 00:10:58.056 "raid": { 00:10:58.056 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:58.056 "strip_size_kb": 64, 00:10:58.056 "state": "online", 00:10:58.056 "raid_level": "concat", 00:10:58.056 "superblock": true, 00:10:58.056 "num_base_bdevs": 4, 00:10:58.056 "num_base_bdevs_discovered": 4, 00:10:58.056 "num_base_bdevs_operational": 4, 00:10:58.056 "base_bdevs_list": [ 00:10:58.056 { 00:10:58.056 "name": "BaseBdev1", 00:10:58.056 "uuid": "e4cf5efe-a5c1-49c9-aba0-d6894f24577e", 00:10:58.056 "is_configured": true, 00:10:58.056 "data_offset": 2048, 00:10:58.056 "data_size": 63488 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "name": "BaseBdev2", 00:10:58.056 "uuid": "301f239d-003d-4bef-9a86-7f5094b001e7", 00:10:58.056 "is_configured": true, 00:10:58.056 "data_offset": 2048, 00:10:58.056 "data_size": 63488 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "name": "BaseBdev3", 00:10:58.056 "uuid": "4de41909-234b-4a00-8db0-770aa8760f52", 00:10:58.056 "is_configured": true, 
00:10:58.056 "data_offset": 2048, 00:10:58.056 "data_size": 63488 00:10:58.056 }, 00:10:58.056 { 00:10:58.056 "name": "BaseBdev4", 00:10:58.056 "uuid": "d21904c8-02ea-4da0-b3a9-1f99d4269b59", 00:10:58.056 "is_configured": true, 00:10:58.056 "data_offset": 2048, 00:10:58.056 "data_size": 63488 00:10:58.056 } 00:10:58.056 ] 00:10:58.056 } 00:10:58.056 } 00:10:58.056 }' 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:58.056 BaseBdev2 00:10:58.056 BaseBdev3 00:10:58.056 BaseBdev4' 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.056 09:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.056 09:24:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.056 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.316 [2024-12-12 09:24:32.200388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.316 [2024-12-12 09:24:32.200462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.316 [2024-12-12 09:24:32.200532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.316 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:58.576 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.576 "name": "Existed_Raid", 00:10:58.576 "uuid": "5c1dbb2b-86d9-4714-82e8-c691d8e9a62f", 00:10:58.576 "strip_size_kb": 64, 00:10:58.576 "state": "offline", 00:10:58.576 "raid_level": "concat", 00:10:58.576 "superblock": true, 00:10:58.576 "num_base_bdevs": 4, 00:10:58.576 "num_base_bdevs_discovered": 3, 00:10:58.576 "num_base_bdevs_operational": 3, 00:10:58.576 "base_bdevs_list": [ 00:10:58.576 { 00:10:58.576 "name": null, 00:10:58.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.576 "is_configured": false, 00:10:58.576 "data_offset": 0, 00:10:58.576 "data_size": 63488 00:10:58.576 }, 00:10:58.576 { 00:10:58.576 "name": "BaseBdev2", 00:10:58.576 "uuid": "301f239d-003d-4bef-9a86-7f5094b001e7", 00:10:58.576 "is_configured": true, 00:10:58.576 "data_offset": 2048, 00:10:58.576 "data_size": 63488 00:10:58.576 }, 00:10:58.576 { 00:10:58.576 "name": "BaseBdev3", 00:10:58.576 "uuid": "4de41909-234b-4a00-8db0-770aa8760f52", 00:10:58.576 "is_configured": true, 00:10:58.576 "data_offset": 2048, 00:10:58.576 "data_size": 63488 00:10:58.576 }, 00:10:58.576 { 00:10:58.576 "name": "BaseBdev4", 00:10:58.576 "uuid": "d21904c8-02ea-4da0-b3a9-1f99d4269b59", 00:10:58.576 "is_configured": true, 00:10:58.576 "data_offset": 2048, 00:10:58.576 "data_size": 63488 00:10:58.576 } 00:10:58.576 ] 00:10:58.576 }' 00:10:58.576 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.576 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.835 
09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.835 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.835 [2024-12-12 09:24:32.777640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.094 09:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.094 [2024-12-12 09:24:32.934502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:59.094 09:24:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.094 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.094 [2024-12-12 09:24:33.077300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:59.094 [2024-12-12 09:24:33.077359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.354 BaseBdev2 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.354 [ 00:10:59.354 { 00:10:59.354 "name": "BaseBdev2", 00:10:59.354 "aliases": [ 00:10:59.354 
"3b3a1e9c-128e-4136-b49f-d73af229f874" 00:10:59.354 ], 00:10:59.354 "product_name": "Malloc disk", 00:10:59.354 "block_size": 512, 00:10:59.354 "num_blocks": 65536, 00:10:59.354 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:10:59.354 "assigned_rate_limits": { 00:10:59.354 "rw_ios_per_sec": 0, 00:10:59.354 "rw_mbytes_per_sec": 0, 00:10:59.354 "r_mbytes_per_sec": 0, 00:10:59.354 "w_mbytes_per_sec": 0 00:10:59.354 }, 00:10:59.354 "claimed": false, 00:10:59.354 "zoned": false, 00:10:59.354 "supported_io_types": { 00:10:59.354 "read": true, 00:10:59.354 "write": true, 00:10:59.354 "unmap": true, 00:10:59.354 "flush": true, 00:10:59.354 "reset": true, 00:10:59.354 "nvme_admin": false, 00:10:59.354 "nvme_io": false, 00:10:59.354 "nvme_io_md": false, 00:10:59.354 "write_zeroes": true, 00:10:59.354 "zcopy": true, 00:10:59.354 "get_zone_info": false, 00:10:59.354 "zone_management": false, 00:10:59.354 "zone_append": false, 00:10:59.354 "compare": false, 00:10:59.354 "compare_and_write": false, 00:10:59.354 "abort": true, 00:10:59.354 "seek_hole": false, 00:10:59.354 "seek_data": false, 00:10:59.354 "copy": true, 00:10:59.354 "nvme_iov_md": false 00:10:59.354 }, 00:10:59.354 "memory_domains": [ 00:10:59.354 { 00:10:59.354 "dma_device_id": "system", 00:10:59.354 "dma_device_type": 1 00:10:59.354 }, 00:10:59.354 { 00:10:59.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.354 "dma_device_type": 2 00:10:59.354 } 00:10:59.354 ], 00:10:59.354 "driver_specific": {} 00:10:59.354 } 00:10:59.354 ] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.354 09:24:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.354 BaseBdev3 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.354 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.355 [ 00:10:59.355 { 
00:10:59.355 "name": "BaseBdev3", 00:10:59.355 "aliases": [ 00:10:59.355 "18569586-1b99-4b76-890e-8ba18793d28e" 00:10:59.355 ], 00:10:59.355 "product_name": "Malloc disk", 00:10:59.355 "block_size": 512, 00:10:59.355 "num_blocks": 65536, 00:10:59.355 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:10:59.355 "assigned_rate_limits": { 00:10:59.355 "rw_ios_per_sec": 0, 00:10:59.355 "rw_mbytes_per_sec": 0, 00:10:59.355 "r_mbytes_per_sec": 0, 00:10:59.355 "w_mbytes_per_sec": 0 00:10:59.355 }, 00:10:59.355 "claimed": false, 00:10:59.355 "zoned": false, 00:10:59.355 "supported_io_types": { 00:10:59.355 "read": true, 00:10:59.355 "write": true, 00:10:59.355 "unmap": true, 00:10:59.355 "flush": true, 00:10:59.355 "reset": true, 00:10:59.355 "nvme_admin": false, 00:10:59.355 "nvme_io": false, 00:10:59.355 "nvme_io_md": false, 00:10:59.355 "write_zeroes": true, 00:10:59.355 "zcopy": true, 00:10:59.355 "get_zone_info": false, 00:10:59.355 "zone_management": false, 00:10:59.355 "zone_append": false, 00:10:59.355 "compare": false, 00:10:59.355 "compare_and_write": false, 00:10:59.355 "abort": true, 00:10:59.355 "seek_hole": false, 00:10:59.355 "seek_data": false, 00:10:59.355 "copy": true, 00:10:59.355 "nvme_iov_md": false 00:10:59.355 }, 00:10:59.355 "memory_domains": [ 00:10:59.355 { 00:10:59.355 "dma_device_id": "system", 00:10:59.355 "dma_device_type": 1 00:10:59.355 }, 00:10:59.355 { 00:10:59.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.355 "dma_device_type": 2 00:10:59.355 } 00:10:59.355 ], 00:10:59.355 "driver_specific": {} 00:10:59.355 } 00:10:59.355 ] 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.355 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.614 BaseBdev4 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.614 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:59.614 [ 00:10:59.614 { 00:10:59.614 "name": "BaseBdev4", 00:10:59.614 "aliases": [ 00:10:59.614 "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a" 00:10:59.614 ], 00:10:59.614 "product_name": "Malloc disk", 00:10:59.614 "block_size": 512, 00:10:59.614 "num_blocks": 65536, 00:10:59.614 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:10:59.614 "assigned_rate_limits": { 00:10:59.614 "rw_ios_per_sec": 0, 00:10:59.614 "rw_mbytes_per_sec": 0, 00:10:59.614 "r_mbytes_per_sec": 0, 00:10:59.614 "w_mbytes_per_sec": 0 00:10:59.614 }, 00:10:59.614 "claimed": false, 00:10:59.614 "zoned": false, 00:10:59.614 "supported_io_types": { 00:10:59.614 "read": true, 00:10:59.614 "write": true, 00:10:59.614 "unmap": true, 00:10:59.614 "flush": true, 00:10:59.614 "reset": true, 00:10:59.614 "nvme_admin": false, 00:10:59.614 "nvme_io": false, 00:10:59.614 "nvme_io_md": false, 00:10:59.614 "write_zeroes": true, 00:10:59.614 "zcopy": true, 00:10:59.614 "get_zone_info": false, 00:10:59.614 "zone_management": false, 00:10:59.614 "zone_append": false, 00:10:59.614 "compare": false, 00:10:59.614 "compare_and_write": false, 00:10:59.614 "abort": true, 00:10:59.614 "seek_hole": false, 00:10:59.614 "seek_data": false, 00:10:59.614 "copy": true, 00:10:59.614 "nvme_iov_md": false 00:10:59.614 }, 00:10:59.614 "memory_domains": [ 00:10:59.614 { 00:10:59.614 "dma_device_id": "system", 00:10:59.614 "dma_device_type": 1 00:10:59.614 }, 00:10:59.614 { 00:10:59.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.615 "dma_device_type": 2 00:10:59.615 } 00:10:59.615 ], 00:10:59.615 "driver_specific": {} 00:10:59.615 } 00:10:59.615 ] 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.615 09:24:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.615 [2024-12-12 09:24:33.464237] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.615 [2024-12-12 09:24:33.464336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.615 [2024-12-12 09:24:33.464401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.615 [2024-12-12 09:24:33.466678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.615 [2024-12-12 09:24:33.466775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.615 "name": "Existed_Raid", 00:10:59.615 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:10:59.615 "strip_size_kb": 64, 00:10:59.615 "state": "configuring", 00:10:59.615 "raid_level": "concat", 00:10:59.615 "superblock": true, 00:10:59.615 "num_base_bdevs": 4, 00:10:59.615 "num_base_bdevs_discovered": 3, 00:10:59.615 "num_base_bdevs_operational": 4, 00:10:59.615 "base_bdevs_list": [ 00:10:59.615 { 00:10:59.615 "name": "BaseBdev1", 00:10:59.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.615 "is_configured": false, 00:10:59.615 "data_offset": 0, 00:10:59.615 "data_size": 0 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "name": "BaseBdev2", 00:10:59.615 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 
00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "name": "BaseBdev3", 00:10:59.615 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "name": "BaseBdev4", 00:10:59.615 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 00:10:59.615 } 00:10:59.615 ] 00:10:59.615 }' 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.615 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.874 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:59.874 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.874 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.133 [2024-12-12 09:24:33.903452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.133 "name": "Existed_Raid", 00:11:00.133 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:00.133 "strip_size_kb": 64, 00:11:00.133 "state": "configuring", 00:11:00.133 "raid_level": "concat", 00:11:00.133 "superblock": true, 00:11:00.133 "num_base_bdevs": 4, 00:11:00.133 "num_base_bdevs_discovered": 2, 00:11:00.133 "num_base_bdevs_operational": 4, 00:11:00.133 "base_bdevs_list": [ 00:11:00.133 { 00:11:00.133 "name": "BaseBdev1", 00:11:00.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.133 "is_configured": false, 00:11:00.133 "data_offset": 0, 00:11:00.133 "data_size": 0 00:11:00.133 }, 00:11:00.133 { 00:11:00.133 "name": null, 00:11:00.133 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:00.133 "is_configured": false, 00:11:00.133 "data_offset": 0, 00:11:00.133 "data_size": 63488 
00:11:00.133 }, 00:11:00.133 { 00:11:00.133 "name": "BaseBdev3", 00:11:00.133 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:00.133 "is_configured": true, 00:11:00.133 "data_offset": 2048, 00:11:00.133 "data_size": 63488 00:11:00.133 }, 00:11:00.133 { 00:11:00.133 "name": "BaseBdev4", 00:11:00.133 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:00.133 "is_configured": true, 00:11:00.133 "data_offset": 2048, 00:11:00.133 "data_size": 63488 00:11:00.133 } 00:11:00.133 ] 00:11:00.133 }' 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.133 09:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 [2024-12-12 09:24:34.373534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.393 BaseBdev1 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 [ 00:11:00.393 { 00:11:00.393 "name": "BaseBdev1", 00:11:00.393 "aliases": [ 00:11:00.393 "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1" 00:11:00.393 ], 00:11:00.393 "product_name": "Malloc disk", 00:11:00.393 "block_size": 512, 00:11:00.393 "num_blocks": 65536, 00:11:00.393 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:00.393 "assigned_rate_limits": { 00:11:00.393 "rw_ios_per_sec": 0, 00:11:00.393 "rw_mbytes_per_sec": 0, 
00:11:00.393 "r_mbytes_per_sec": 0, 00:11:00.393 "w_mbytes_per_sec": 0 00:11:00.393 }, 00:11:00.393 "claimed": true, 00:11:00.393 "claim_type": "exclusive_write", 00:11:00.393 "zoned": false, 00:11:00.393 "supported_io_types": { 00:11:00.393 "read": true, 00:11:00.393 "write": true, 00:11:00.393 "unmap": true, 00:11:00.393 "flush": true, 00:11:00.393 "reset": true, 00:11:00.393 "nvme_admin": false, 00:11:00.393 "nvme_io": false, 00:11:00.393 "nvme_io_md": false, 00:11:00.393 "write_zeroes": true, 00:11:00.393 "zcopy": true, 00:11:00.393 "get_zone_info": false, 00:11:00.393 "zone_management": false, 00:11:00.393 "zone_append": false, 00:11:00.393 "compare": false, 00:11:00.393 "compare_and_write": false, 00:11:00.393 "abort": true, 00:11:00.393 "seek_hole": false, 00:11:00.393 "seek_data": false, 00:11:00.393 "copy": true, 00:11:00.393 "nvme_iov_md": false 00:11:00.393 }, 00:11:00.393 "memory_domains": [ 00:11:00.393 { 00:11:00.393 "dma_device_id": "system", 00:11:00.393 "dma_device_type": 1 00:11:00.393 }, 00:11:00.393 { 00:11:00.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.393 "dma_device_type": 2 00:11:00.393 } 00:11:00.393 ], 00:11:00.393 "driver_specific": {} 00:11:00.393 } 00:11:00.393 ] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.393 09:24:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.393 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.653 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.653 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.653 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.653 "name": "Existed_Raid", 00:11:00.653 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:00.653 "strip_size_kb": 64, 00:11:00.653 "state": "configuring", 00:11:00.653 "raid_level": "concat", 00:11:00.653 "superblock": true, 00:11:00.653 "num_base_bdevs": 4, 00:11:00.653 "num_base_bdevs_discovered": 3, 00:11:00.653 "num_base_bdevs_operational": 4, 00:11:00.653 "base_bdevs_list": [ 00:11:00.653 { 00:11:00.653 "name": "BaseBdev1", 00:11:00.653 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:00.653 "is_configured": true, 00:11:00.653 "data_offset": 2048, 00:11:00.653 "data_size": 63488 00:11:00.653 }, 00:11:00.653 { 
00:11:00.653 "name": null, 00:11:00.653 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:00.653 "is_configured": false, 00:11:00.653 "data_offset": 0, 00:11:00.653 "data_size": 63488 00:11:00.653 }, 00:11:00.653 { 00:11:00.653 "name": "BaseBdev3", 00:11:00.653 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:00.653 "is_configured": true, 00:11:00.653 "data_offset": 2048, 00:11:00.653 "data_size": 63488 00:11:00.653 }, 00:11:00.653 { 00:11:00.653 "name": "BaseBdev4", 00:11:00.653 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:00.653 "is_configured": true, 00:11:00.653 "data_offset": 2048, 00:11:00.653 "data_size": 63488 00:11:00.653 } 00:11:00.653 ] 00:11:00.653 }' 00:11:00.653 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.653 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.911 [2024-12-12 09:24:34.900835] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.911 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.170 09:24:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.170 "name": "Existed_Raid", 00:11:01.170 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:01.170 "strip_size_kb": 64, 00:11:01.170 "state": "configuring", 00:11:01.170 "raid_level": "concat", 00:11:01.170 "superblock": true, 00:11:01.170 "num_base_bdevs": 4, 00:11:01.170 "num_base_bdevs_discovered": 2, 00:11:01.170 "num_base_bdevs_operational": 4, 00:11:01.170 "base_bdevs_list": [ 00:11:01.170 { 00:11:01.170 "name": "BaseBdev1", 00:11:01.170 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:01.170 "is_configured": true, 00:11:01.170 "data_offset": 2048, 00:11:01.170 "data_size": 63488 00:11:01.170 }, 00:11:01.170 { 00:11:01.170 "name": null, 00:11:01.170 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:01.170 "is_configured": false, 00:11:01.170 "data_offset": 0, 00:11:01.170 "data_size": 63488 00:11:01.170 }, 00:11:01.170 { 00:11:01.170 "name": null, 00:11:01.170 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:01.170 "is_configured": false, 00:11:01.170 "data_offset": 0, 00:11:01.170 "data_size": 63488 00:11:01.170 }, 00:11:01.170 { 00:11:01.170 "name": "BaseBdev4", 00:11:01.170 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:01.170 "is_configured": true, 00:11:01.170 "data_offset": 2048, 00:11:01.170 "data_size": 63488 00:11:01.170 } 00:11:01.170 ] 00:11:01.170 }' 00:11:01.170 09:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.170 09:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.429 
09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 [2024-12-12 09:24:35.360014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.429 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.429 "name": "Existed_Raid", 00:11:01.429 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:01.429 "strip_size_kb": 64, 00:11:01.430 "state": "configuring", 00:11:01.430 "raid_level": "concat", 00:11:01.430 "superblock": true, 00:11:01.430 "num_base_bdevs": 4, 00:11:01.430 "num_base_bdevs_discovered": 3, 00:11:01.430 "num_base_bdevs_operational": 4, 00:11:01.430 "base_bdevs_list": [ 00:11:01.430 { 00:11:01.430 "name": "BaseBdev1", 00:11:01.430 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:01.430 "is_configured": true, 00:11:01.430 "data_offset": 2048, 00:11:01.430 "data_size": 63488 00:11:01.430 }, 00:11:01.430 { 00:11:01.430 "name": null, 00:11:01.430 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:01.430 "is_configured": false, 00:11:01.430 "data_offset": 0, 00:11:01.430 "data_size": 63488 00:11:01.430 }, 00:11:01.430 { 00:11:01.430 "name": "BaseBdev3", 00:11:01.430 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:01.430 "is_configured": true, 00:11:01.430 "data_offset": 2048, 00:11:01.430 "data_size": 63488 00:11:01.430 }, 00:11:01.430 { 00:11:01.430 "name": "BaseBdev4", 00:11:01.430 "uuid": 
"dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:01.430 "is_configured": true, 00:11:01.430 "data_offset": 2048, 00:11:01.430 "data_size": 63488 00:11:01.430 } 00:11:01.430 ] 00:11:01.430 }' 00:11:01.430 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.430 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.998 [2024-12-12 09:24:35.839237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.998 "name": "Existed_Raid", 00:11:01.998 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:01.998 "strip_size_kb": 64, 00:11:01.998 "state": "configuring", 00:11:01.998 "raid_level": "concat", 00:11:01.998 "superblock": true, 00:11:01.998 "num_base_bdevs": 4, 00:11:01.998 "num_base_bdevs_discovered": 2, 00:11:01.998 "num_base_bdevs_operational": 4, 00:11:01.998 "base_bdevs_list": [ 00:11:01.998 { 00:11:01.998 "name": null, 00:11:01.998 
"uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:01.998 "is_configured": false, 00:11:01.998 "data_offset": 0, 00:11:01.998 "data_size": 63488 00:11:01.998 }, 00:11:01.998 { 00:11:01.998 "name": null, 00:11:01.998 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:01.998 "is_configured": false, 00:11:01.998 "data_offset": 0, 00:11:01.998 "data_size": 63488 00:11:01.998 }, 00:11:01.998 { 00:11:01.998 "name": "BaseBdev3", 00:11:01.998 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:01.998 "is_configured": true, 00:11:01.998 "data_offset": 2048, 00:11:01.998 "data_size": 63488 00:11:01.998 }, 00:11:01.998 { 00:11:01.998 "name": "BaseBdev4", 00:11:01.998 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:01.998 "is_configured": true, 00:11:01.998 "data_offset": 2048, 00:11:01.998 "data_size": 63488 00:11:01.998 } 00:11:01.998 ] 00:11:01.998 }' 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.998 09:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.566 [2024-12-12 09:24:36.443938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.566 09:24:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.566 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.566 "name": "Existed_Raid", 00:11:02.566 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:02.566 "strip_size_kb": 64, 00:11:02.566 "state": "configuring", 00:11:02.566 "raid_level": "concat", 00:11:02.566 "superblock": true, 00:11:02.566 "num_base_bdevs": 4, 00:11:02.566 "num_base_bdevs_discovered": 3, 00:11:02.566 "num_base_bdevs_operational": 4, 00:11:02.566 "base_bdevs_list": [ 00:11:02.566 { 00:11:02.566 "name": null, 00:11:02.566 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:02.566 "is_configured": false, 00:11:02.566 "data_offset": 0, 00:11:02.566 "data_size": 63488 00:11:02.566 }, 00:11:02.566 { 00:11:02.566 "name": "BaseBdev2", 00:11:02.566 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:02.566 "is_configured": true, 00:11:02.566 "data_offset": 2048, 00:11:02.566 "data_size": 63488 00:11:02.566 }, 00:11:02.566 { 00:11:02.566 "name": "BaseBdev3", 00:11:02.566 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:02.566 "is_configured": true, 00:11:02.566 "data_offset": 2048, 00:11:02.566 "data_size": 63488 00:11:02.566 }, 00:11:02.566 { 00:11:02.566 "name": "BaseBdev4", 00:11:02.566 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:02.566 "is_configured": true, 00:11:02.566 "data_offset": 2048, 00:11:02.566 "data_size": 63488 00:11:02.566 } 00:11:02.566 ] 00:11:02.566 }' 00:11:02.567 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.567 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.135 09:24:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.135 09:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.135 [2024-12-12 09:24:37.012387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.135 [2024-12-12 09:24:37.012752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.135 [2024-12-12 09:24:37.012771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.135 [2024-12-12 09:24:37.013093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:03.135 NewBaseBdev 00:11:03.135 [2024-12-12 09:24:37.013255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.135 [2024-12-12 09:24:37.013273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:03.135 [2024-12-12 09:24:37.013427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:03.135 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.135 09:24:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.135 [ 00:11:03.135 { 00:11:03.135 "name": "NewBaseBdev", 00:11:03.135 "aliases": [ 00:11:03.135 "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1" 00:11:03.135 ], 00:11:03.135 "product_name": "Malloc disk", 00:11:03.135 "block_size": 512, 00:11:03.135 "num_blocks": 65536, 00:11:03.135 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:03.135 "assigned_rate_limits": { 00:11:03.135 "rw_ios_per_sec": 0, 00:11:03.135 "rw_mbytes_per_sec": 0, 00:11:03.135 "r_mbytes_per_sec": 0, 00:11:03.135 "w_mbytes_per_sec": 0 00:11:03.135 }, 00:11:03.135 "claimed": true, 00:11:03.135 "claim_type": "exclusive_write", 00:11:03.135 "zoned": false, 00:11:03.135 "supported_io_types": { 00:11:03.135 "read": true, 00:11:03.135 "write": true, 00:11:03.135 "unmap": true, 00:11:03.135 "flush": true, 00:11:03.135 "reset": true, 00:11:03.135 "nvme_admin": false, 00:11:03.135 "nvme_io": false, 00:11:03.135 "nvme_io_md": false, 00:11:03.135 "write_zeroes": true, 00:11:03.135 "zcopy": true, 00:11:03.135 "get_zone_info": false, 00:11:03.135 "zone_management": false, 00:11:03.135 "zone_append": false, 00:11:03.135 "compare": false, 00:11:03.135 "compare_and_write": false, 00:11:03.135 "abort": true, 00:11:03.135 "seek_hole": false, 00:11:03.135 "seek_data": false, 00:11:03.135 "copy": true, 00:11:03.135 "nvme_iov_md": false 00:11:03.135 }, 00:11:03.135 "memory_domains": [ 00:11:03.135 { 00:11:03.135 "dma_device_id": "system", 00:11:03.136 "dma_device_type": 1 00:11:03.136 }, 00:11:03.136 { 00:11:03.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.136 "dma_device_type": 2 00:11:03.136 } 00:11:03.136 ], 00:11:03.136 "driver_specific": {} 00:11:03.136 } 00:11:03.136 ] 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.136 09:24:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.136 "name": "Existed_Raid", 00:11:03.136 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:03.136 "strip_size_kb": 64, 00:11:03.136 
"state": "online", 00:11:03.136 "raid_level": "concat", 00:11:03.136 "superblock": true, 00:11:03.136 "num_base_bdevs": 4, 00:11:03.136 "num_base_bdevs_discovered": 4, 00:11:03.136 "num_base_bdevs_operational": 4, 00:11:03.136 "base_bdevs_list": [ 00:11:03.136 { 00:11:03.136 "name": "NewBaseBdev", 00:11:03.136 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:03.136 "is_configured": true, 00:11:03.136 "data_offset": 2048, 00:11:03.136 "data_size": 63488 00:11:03.136 }, 00:11:03.136 { 00:11:03.136 "name": "BaseBdev2", 00:11:03.136 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:03.136 "is_configured": true, 00:11:03.136 "data_offset": 2048, 00:11:03.136 "data_size": 63488 00:11:03.136 }, 00:11:03.136 { 00:11:03.136 "name": "BaseBdev3", 00:11:03.136 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:03.136 "is_configured": true, 00:11:03.136 "data_offset": 2048, 00:11:03.136 "data_size": 63488 00:11:03.136 }, 00:11:03.136 { 00:11:03.136 "name": "BaseBdev4", 00:11:03.136 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:03.136 "is_configured": true, 00:11:03.136 "data_offset": 2048, 00:11:03.136 "data_size": 63488 00:11:03.136 } 00:11:03.136 ] 00:11:03.136 }' 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.136 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.705 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.705 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.705 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.706 
09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.706 [2024-12-12 09:24:37.487955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.706 "name": "Existed_Raid", 00:11:03.706 "aliases": [ 00:11:03.706 "2c89ffb7-944d-4e20-8b54-6332a3639bc2" 00:11:03.706 ], 00:11:03.706 "product_name": "Raid Volume", 00:11:03.706 "block_size": 512, 00:11:03.706 "num_blocks": 253952, 00:11:03.706 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:03.706 "assigned_rate_limits": { 00:11:03.706 "rw_ios_per_sec": 0, 00:11:03.706 "rw_mbytes_per_sec": 0, 00:11:03.706 "r_mbytes_per_sec": 0, 00:11:03.706 "w_mbytes_per_sec": 0 00:11:03.706 }, 00:11:03.706 "claimed": false, 00:11:03.706 "zoned": false, 00:11:03.706 "supported_io_types": { 00:11:03.706 "read": true, 00:11:03.706 "write": true, 00:11:03.706 "unmap": true, 00:11:03.706 "flush": true, 00:11:03.706 "reset": true, 00:11:03.706 "nvme_admin": false, 00:11:03.706 "nvme_io": false, 00:11:03.706 "nvme_io_md": false, 00:11:03.706 "write_zeroes": true, 00:11:03.706 "zcopy": false, 00:11:03.706 "get_zone_info": false, 00:11:03.706 "zone_management": false, 00:11:03.706 "zone_append": false, 00:11:03.706 "compare": false, 00:11:03.706 "compare_and_write": false, 00:11:03.706 "abort": 
false, 00:11:03.706 "seek_hole": false, 00:11:03.706 "seek_data": false, 00:11:03.706 "copy": false, 00:11:03.706 "nvme_iov_md": false 00:11:03.706 }, 00:11:03.706 "memory_domains": [ 00:11:03.706 { 00:11:03.706 "dma_device_id": "system", 00:11:03.706 "dma_device_type": 1 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.706 "dma_device_type": 2 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "system", 00:11:03.706 "dma_device_type": 1 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.706 "dma_device_type": 2 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "system", 00:11:03.706 "dma_device_type": 1 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.706 "dma_device_type": 2 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "system", 00:11:03.706 "dma_device_type": 1 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.706 "dma_device_type": 2 00:11:03.706 } 00:11:03.706 ], 00:11:03.706 "driver_specific": { 00:11:03.706 "raid": { 00:11:03.706 "uuid": "2c89ffb7-944d-4e20-8b54-6332a3639bc2", 00:11:03.706 "strip_size_kb": 64, 00:11:03.706 "state": "online", 00:11:03.706 "raid_level": "concat", 00:11:03.706 "superblock": true, 00:11:03.706 "num_base_bdevs": 4, 00:11:03.706 "num_base_bdevs_discovered": 4, 00:11:03.706 "num_base_bdevs_operational": 4, 00:11:03.706 "base_bdevs_list": [ 00:11:03.706 { 00:11:03.706 "name": "NewBaseBdev", 00:11:03.706 "uuid": "9bcbbc8c-5be6-4a12-8dbf-f03e76a5f0a1", 00:11:03.706 "is_configured": true, 00:11:03.706 "data_offset": 2048, 00:11:03.706 "data_size": 63488 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "name": "BaseBdev2", 00:11:03.706 "uuid": "3b3a1e9c-128e-4136-b49f-d73af229f874", 00:11:03.706 "is_configured": true, 00:11:03.706 "data_offset": 2048, 00:11:03.706 "data_size": 63488 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 
"name": "BaseBdev3", 00:11:03.706 "uuid": "18569586-1b99-4b76-890e-8ba18793d28e", 00:11:03.706 "is_configured": true, 00:11:03.706 "data_offset": 2048, 00:11:03.706 "data_size": 63488 00:11:03.706 }, 00:11:03.706 { 00:11:03.706 "name": "BaseBdev4", 00:11:03.706 "uuid": "dbfab1b1-bd69-491f-a4d8-ca99466b0e6a", 00:11:03.706 "is_configured": true, 00:11:03.706 "data_offset": 2048, 00:11:03.706 "data_size": 63488 00:11:03.706 } 00:11:03.706 ] 00:11:03.706 } 00:11:03.706 } 00:11:03.706 }' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:03.706 BaseBdev2 00:11:03.706 BaseBdev3 00:11:03.706 BaseBdev4' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.706 09:24:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.706 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.966 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.967 [2024-12-12 09:24:37.815073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.967 [2024-12-12 09:24:37.815148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.967 [2024-12-12 09:24:37.815269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.967 [2024-12-12 09:24:37.815370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.967 [2024-12-12 09:24:37.815424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73087 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73087 ']' 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73087 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73087 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.967 killing process with pid 73087 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73087' 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73087 00:11:03.967 [2024-12-12 09:24:37.864504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.967 09:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73087 00:11:04.537 [2024-12-12 09:24:38.282861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.475 09:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:05.475 00:11:05.475 real 0m11.675s 00:11:05.475 user 0m18.200s 00:11:05.475 sys 0m2.271s 00:11:05.475 09:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.475 09:24:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.475 ************************************ 00:11:05.475 END TEST raid_state_function_test_sb 00:11:05.475 ************************************ 00:11:05.734 09:24:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:05.734 09:24:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:05.734 09:24:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.734 09:24:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.734 ************************************ 00:11:05.734 START TEST raid_superblock_test 00:11:05.734 ************************************ 00:11:05.734 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:05.734 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:05.734 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:05.734 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:05.734 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:05.734 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73766 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73766 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73766 ']' 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.735 09:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.735 [2024-12-12 09:24:39.637761] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:05.735 [2024-12-12 09:24:39.637989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73766 ] 00:11:05.993 [2024-12-12 09:24:39.812994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.993 [2024-12-12 09:24:39.942805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.253 [2024-12-12 09:24:40.169203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.253 [2024-12-12 09:24:40.169365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:06.513 
09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.513 malloc1 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.513 [2024-12-12 09:24:40.503537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.513 [2024-12-12 09:24:40.503715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.513 [2024-12-12 09:24:40.503759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:06.513 [2024-12-12 09:24:40.503788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.513 [2024-12-12 09:24:40.506249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.513 [2024-12-12 09:24:40.506322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.513 pt1 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.513 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.772 malloc2 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.772 [2024-12-12 09:24:40.569042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.772 [2024-12-12 09:24:40.569097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.772 [2024-12-12 09:24:40.569121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:06.772 [2024-12-12 09:24:40.569130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.772 [2024-12-12 09:24:40.571503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.772 [2024-12-12 09:24:40.571608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.772 
pt2 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.772 malloc3 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.772 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.773 [2024-12-12 09:24:40.641150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.773 [2024-12-12 09:24:40.641278] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.773 [2024-12-12 09:24:40.641319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:06.773 [2024-12-12 09:24:40.641347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.773 [2024-12-12 09:24:40.643735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.773 [2024-12-12 09:24:40.643809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.773 pt3 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.773 malloc4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.773 [2024-12-12 09:24:40.706129] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:06.773 [2024-12-12 09:24:40.706240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.773 [2024-12-12 09:24:40.706279] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:06.773 [2024-12-12 09:24:40.706307] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.773 [2024-12-12 09:24:40.708672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.773 [2024-12-12 09:24:40.708742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:06.773 pt4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.773 [2024-12-12 09:24:40.718122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.773 [2024-12-12 
09:24:40.720253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.773 [2024-12-12 09:24:40.720382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.773 [2024-12-12 09:24:40.720475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:06.773 [2024-12-12 09:24:40.720715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:06.773 [2024-12-12 09:24:40.720763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.773 [2024-12-12 09:24:40.721056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.773 [2024-12-12 09:24:40.721280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:06.773 [2024-12-12 09:24:40.721329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:06.773 [2024-12-12 09:24:40.721523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.773 "name": "raid_bdev1", 00:11:06.773 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:06.773 "strip_size_kb": 64, 00:11:06.773 "state": "online", 00:11:06.773 "raid_level": "concat", 00:11:06.773 "superblock": true, 00:11:06.773 "num_base_bdevs": 4, 00:11:06.773 "num_base_bdevs_discovered": 4, 00:11:06.773 "num_base_bdevs_operational": 4, 00:11:06.773 "base_bdevs_list": [ 00:11:06.773 { 00:11:06.773 "name": "pt1", 00:11:06.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.773 "is_configured": true, 00:11:06.773 "data_offset": 2048, 00:11:06.773 "data_size": 63488 00:11:06.773 }, 00:11:06.773 { 00:11:06.773 "name": "pt2", 00:11:06.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.773 "is_configured": true, 00:11:06.773 "data_offset": 2048, 00:11:06.773 "data_size": 63488 00:11:06.773 }, 00:11:06.773 { 00:11:06.773 "name": "pt3", 00:11:06.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.773 "is_configured": true, 00:11:06.773 "data_offset": 2048, 00:11:06.773 
"data_size": 63488 00:11:06.773 }, 00:11:06.773 { 00:11:06.773 "name": "pt4", 00:11:06.773 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.773 "is_configured": true, 00:11:06.773 "data_offset": 2048, 00:11:06.773 "data_size": 63488 00:11:06.773 } 00:11:06.773 ] 00:11:06.773 }' 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.773 09:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.341 [2024-12-12 09:24:41.181593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.341 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.341 "name": "raid_bdev1", 00:11:07.341 "aliases": [ 00:11:07.341 "97078094-1380-4623-9966-a2c29edce945" 
00:11:07.341 ], 00:11:07.341 "product_name": "Raid Volume", 00:11:07.341 "block_size": 512, 00:11:07.341 "num_blocks": 253952, 00:11:07.341 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:07.341 "assigned_rate_limits": { 00:11:07.341 "rw_ios_per_sec": 0, 00:11:07.341 "rw_mbytes_per_sec": 0, 00:11:07.341 "r_mbytes_per_sec": 0, 00:11:07.341 "w_mbytes_per_sec": 0 00:11:07.341 }, 00:11:07.341 "claimed": false, 00:11:07.341 "zoned": false, 00:11:07.341 "supported_io_types": { 00:11:07.341 "read": true, 00:11:07.341 "write": true, 00:11:07.341 "unmap": true, 00:11:07.341 "flush": true, 00:11:07.341 "reset": true, 00:11:07.341 "nvme_admin": false, 00:11:07.341 "nvme_io": false, 00:11:07.341 "nvme_io_md": false, 00:11:07.341 "write_zeroes": true, 00:11:07.341 "zcopy": false, 00:11:07.341 "get_zone_info": false, 00:11:07.341 "zone_management": false, 00:11:07.341 "zone_append": false, 00:11:07.341 "compare": false, 00:11:07.341 "compare_and_write": false, 00:11:07.341 "abort": false, 00:11:07.341 "seek_hole": false, 00:11:07.342 "seek_data": false, 00:11:07.342 "copy": false, 00:11:07.342 "nvme_iov_md": false 00:11:07.342 }, 00:11:07.342 "memory_domains": [ 00:11:07.342 { 00:11:07.342 "dma_device_id": "system", 00:11:07.342 "dma_device_type": 1 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.342 "dma_device_type": 2 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": "system", 00:11:07.342 "dma_device_type": 1 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.342 "dma_device_type": 2 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": "system", 00:11:07.342 "dma_device_type": 1 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.342 "dma_device_type": 2 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": "system", 00:11:07.342 "dma_device_type": 1 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:07.342 "dma_device_type": 2 00:11:07.342 } 00:11:07.342 ], 00:11:07.342 "driver_specific": { 00:11:07.342 "raid": { 00:11:07.342 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:07.342 "strip_size_kb": 64, 00:11:07.342 "state": "online", 00:11:07.342 "raid_level": "concat", 00:11:07.342 "superblock": true, 00:11:07.342 "num_base_bdevs": 4, 00:11:07.342 "num_base_bdevs_discovered": 4, 00:11:07.342 "num_base_bdevs_operational": 4, 00:11:07.342 "base_bdevs_list": [ 00:11:07.342 { 00:11:07.342 "name": "pt1", 00:11:07.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.342 "is_configured": true, 00:11:07.342 "data_offset": 2048, 00:11:07.342 "data_size": 63488 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "name": "pt2", 00:11:07.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.342 "is_configured": true, 00:11:07.342 "data_offset": 2048, 00:11:07.342 "data_size": 63488 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "name": "pt3", 00:11:07.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.342 "is_configured": true, 00:11:07.342 "data_offset": 2048, 00:11:07.342 "data_size": 63488 00:11:07.342 }, 00:11:07.342 { 00:11:07.342 "name": "pt4", 00:11:07.342 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.342 "is_configured": true, 00:11:07.342 "data_offset": 2048, 00:11:07.342 "data_size": 63488 00:11:07.342 } 00:11:07.342 ] 00:11:07.342 } 00:11:07.342 } 00:11:07.342 }' 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:07.342 pt2 00:11:07.342 pt3 00:11:07.342 pt4' 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.342 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.601 09:24:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.601 [2024-12-12 09:24:41.520947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=97078094-1380-4623-9966-a2c29edce945 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 97078094-1380-4623-9966-a2c29edce945 ']' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.601 [2024-12-12 09:24:41.564590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.601 [2024-12-12 09:24:41.564659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.601 [2024-12-12 09:24:41.564760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.601 [2024-12-12 09:24:41.564853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.601 [2024-12-12 09:24:41.564903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.601 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.861 09:24:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 [2024-12-12 09:24:41.712358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:07.861 [2024-12-12 09:24:41.714469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:07.861 [2024-12-12 09:24:41.714559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:07.861 [2024-12-12 09:24:41.714611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:07.861 [2024-12-12 09:24:41.714712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:07.861 [2024-12-12 09:24:41.714798] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:07.861 [2024-12-12 09:24:41.714854] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:07.862 [2024-12-12 09:24:41.714895] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:07.862 [2024-12-12 09:24:41.714908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.862 [2024-12-12 09:24:41.714919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:07.862 request: 00:11:07.862 { 00:11:07.862 "name": "raid_bdev1", 00:11:07.862 "raid_level": "concat", 00:11:07.862 "base_bdevs": [ 00:11:07.862 "malloc1", 00:11:07.862 "malloc2", 00:11:07.862 "malloc3", 00:11:07.862 "malloc4" 00:11:07.862 ], 00:11:07.862 "strip_size_kb": 64, 00:11:07.862 "superblock": false, 00:11:07.862 "method": "bdev_raid_create", 00:11:07.862 "req_id": 1 00:11:07.862 } 00:11:07.862 Got JSON-RPC error response 00:11:07.862 response: 00:11:07.862 { 00:11:07.862 "code": -17, 00:11:07.862 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:07.862 } 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.862 [2024-12-12 09:24:41.768239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.862 [2024-12-12 09:24:41.768329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.862 [2024-12-12 09:24:41.768361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:07.862 [2024-12-12 09:24:41.768390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.862 [2024-12-12 09:24:41.770777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.862 [2024-12-12 09:24:41.770850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.862 [2024-12-12 09:24:41.770945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:07.862 [2024-12-12 09:24:41.771037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.862 pt1 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.862 "name": "raid_bdev1", 00:11:07.862 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:07.862 "strip_size_kb": 64, 00:11:07.862 "state": "configuring", 00:11:07.862 "raid_level": "concat", 00:11:07.862 "superblock": true, 00:11:07.862 "num_base_bdevs": 4, 00:11:07.862 "num_base_bdevs_discovered": 1, 00:11:07.862 "num_base_bdevs_operational": 4, 00:11:07.862 "base_bdevs_list": [ 00:11:07.862 { 00:11:07.862 "name": "pt1", 00:11:07.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.862 "is_configured": true, 00:11:07.862 "data_offset": 2048, 00:11:07.862 "data_size": 63488 00:11:07.862 }, 00:11:07.862 { 00:11:07.862 "name": null, 00:11:07.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.862 "is_configured": false, 00:11:07.862 "data_offset": 2048, 00:11:07.862 "data_size": 63488 00:11:07.862 }, 00:11:07.862 { 00:11:07.862 "name": null, 00:11:07.862 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.862 "is_configured": false, 00:11:07.862 "data_offset": 2048, 00:11:07.862 "data_size": 63488 00:11:07.862 }, 00:11:07.862 { 00:11:07.862 "name": null, 00:11:07.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.862 "is_configured": false, 00:11:07.862 "data_offset": 2048, 00:11:07.862 "data_size": 63488 00:11:07.862 } 00:11:07.862 ] 00:11:07.862 }' 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.862 09:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.432 [2024-12-12 09:24:42.215537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.432 [2024-12-12 09:24:42.215612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.432 [2024-12-12 09:24:42.215632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:08.432 [2024-12-12 09:24:42.215644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.432 [2024-12-12 09:24:42.216132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.432 [2024-12-12 09:24:42.216155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.432 [2024-12-12 09:24:42.216232] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.432 [2024-12-12 09:24:42.216256] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.432 pt2 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.432 [2024-12-12 09:24:42.223528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.432 09:24:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.432 "name": "raid_bdev1", 00:11:08.432 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:08.432 "strip_size_kb": 64, 00:11:08.432 "state": "configuring", 00:11:08.432 "raid_level": "concat", 00:11:08.432 "superblock": true, 00:11:08.432 "num_base_bdevs": 4, 00:11:08.432 "num_base_bdevs_discovered": 1, 00:11:08.432 "num_base_bdevs_operational": 4, 00:11:08.432 "base_bdevs_list": [ 00:11:08.432 { 00:11:08.432 "name": "pt1", 00:11:08.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.432 "is_configured": true, 00:11:08.432 "data_offset": 2048, 00:11:08.432 "data_size": 63488 00:11:08.432 }, 00:11:08.432 { 00:11:08.432 "name": null, 00:11:08.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.432 "is_configured": false, 00:11:08.432 "data_offset": 0, 00:11:08.432 "data_size": 63488 00:11:08.432 }, 00:11:08.432 { 00:11:08.432 "name": null, 00:11:08.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.432 "is_configured": false, 00:11:08.432 "data_offset": 2048, 00:11:08.432 "data_size": 63488 00:11:08.432 }, 00:11:08.432 { 00:11:08.432 "name": null, 00:11:08.432 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.432 "is_configured": false, 00:11:08.432 "data_offset": 2048, 00:11:08.432 "data_size": 63488 00:11:08.432 } 00:11:08.432 ] 00:11:08.432 }' 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.432 09:24:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.691 [2024-12-12 09:24:42.646775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.691 [2024-12-12 09:24:42.646834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.691 [2024-12-12 09:24:42.646855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:08.691 [2024-12-12 09:24:42.646864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.691 [2024-12-12 09:24:42.647351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.691 [2024-12-12 09:24:42.647381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.691 [2024-12-12 09:24:42.647457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.691 [2024-12-12 09:24:42.647477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.691 pt2 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.691 [2024-12-12 09:24:42.658737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.691 [2024-12-12 09:24:42.658788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.691 [2024-12-12 09:24:42.658806] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:08.691 [2024-12-12 09:24:42.658814] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.691 [2024-12-12 09:24:42.659223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.691 [2024-12-12 09:24:42.659251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.691 [2024-12-12 09:24:42.659310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:08.691 [2024-12-12 09:24:42.659335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.691 pt3 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.691 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.692 [2024-12-12 09:24:42.670700] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:08.692 [2024-12-12 09:24:42.670759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.692 [2024-12-12 09:24:42.670774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:08.692 [2024-12-12 09:24:42.670782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.692 [2024-12-12 09:24:42.671193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.692 [2024-12-12 09:24:42.671221] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:08.692 [2024-12-12 09:24:42.671282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:08.692 [2024-12-12 09:24:42.671303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:08.692 [2024-12-12 09:24:42.671434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:08.692 [2024-12-12 09:24:42.671450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.692 [2024-12-12 09:24:42.671703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:08.692 [2024-12-12 09:24:42.671862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:08.692 [2024-12-12 09:24:42.671880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:08.692 [2024-12-12 09:24:42.672014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.692 pt4 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.692 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.951 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.951 "name": "raid_bdev1", 00:11:08.951 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:08.951 "strip_size_kb": 64, 00:11:08.951 "state": "online", 00:11:08.951 "raid_level": "concat", 00:11:08.951 
"superblock": true, 00:11:08.951 "num_base_bdevs": 4, 00:11:08.951 "num_base_bdevs_discovered": 4, 00:11:08.951 "num_base_bdevs_operational": 4, 00:11:08.951 "base_bdevs_list": [ 00:11:08.951 { 00:11:08.951 "name": "pt1", 00:11:08.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "pt2", 00:11:08.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "pt3", 00:11:08.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "pt4", 00:11:08.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 } 00:11:08.951 ] 00:11:08.951 }' 00:11:08.951 09:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.951 09:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.210 09:24:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.210 [2024-12-12 09:24:43.094348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.210 "name": "raid_bdev1", 00:11:09.210 "aliases": [ 00:11:09.210 "97078094-1380-4623-9966-a2c29edce945" 00:11:09.210 ], 00:11:09.210 "product_name": "Raid Volume", 00:11:09.210 "block_size": 512, 00:11:09.210 "num_blocks": 253952, 00:11:09.210 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:09.210 "assigned_rate_limits": { 00:11:09.210 "rw_ios_per_sec": 0, 00:11:09.210 "rw_mbytes_per_sec": 0, 00:11:09.210 "r_mbytes_per_sec": 0, 00:11:09.210 "w_mbytes_per_sec": 0 00:11:09.210 }, 00:11:09.210 "claimed": false, 00:11:09.210 "zoned": false, 00:11:09.210 "supported_io_types": { 00:11:09.210 "read": true, 00:11:09.210 "write": true, 00:11:09.210 "unmap": true, 00:11:09.210 "flush": true, 00:11:09.210 "reset": true, 00:11:09.210 "nvme_admin": false, 00:11:09.210 "nvme_io": false, 00:11:09.210 "nvme_io_md": false, 00:11:09.210 "write_zeroes": true, 00:11:09.210 "zcopy": false, 00:11:09.210 "get_zone_info": false, 00:11:09.210 "zone_management": false, 00:11:09.210 "zone_append": false, 00:11:09.210 "compare": false, 00:11:09.210 "compare_and_write": false, 00:11:09.210 "abort": false, 00:11:09.210 "seek_hole": false, 00:11:09.210 "seek_data": false, 00:11:09.210 "copy": false, 00:11:09.210 "nvme_iov_md": false 00:11:09.210 }, 00:11:09.210 
"memory_domains": [ 00:11:09.210 { 00:11:09.210 "dma_device_id": "system", 00:11:09.210 "dma_device_type": 1 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.210 "dma_device_type": 2 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "system", 00:11:09.210 "dma_device_type": 1 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.210 "dma_device_type": 2 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "system", 00:11:09.210 "dma_device_type": 1 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.210 "dma_device_type": 2 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "system", 00:11:09.210 "dma_device_type": 1 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.210 "dma_device_type": 2 00:11:09.210 } 00:11:09.210 ], 00:11:09.210 "driver_specific": { 00:11:09.210 "raid": { 00:11:09.210 "uuid": "97078094-1380-4623-9966-a2c29edce945", 00:11:09.210 "strip_size_kb": 64, 00:11:09.210 "state": "online", 00:11:09.210 "raid_level": "concat", 00:11:09.210 "superblock": true, 00:11:09.210 "num_base_bdevs": 4, 00:11:09.210 "num_base_bdevs_discovered": 4, 00:11:09.210 "num_base_bdevs_operational": 4, 00:11:09.210 "base_bdevs_list": [ 00:11:09.210 { 00:11:09.210 "name": "pt1", 00:11:09.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.210 "is_configured": true, 00:11:09.210 "data_offset": 2048, 00:11:09.210 "data_size": 63488 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "name": "pt2", 00:11:09.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.210 "is_configured": true, 00:11:09.210 "data_offset": 2048, 00:11:09.210 "data_size": 63488 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "name": "pt3", 00:11:09.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.210 "is_configured": true, 00:11:09.210 "data_offset": 2048, 00:11:09.210 "data_size": 63488 
00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "name": "pt4", 00:11:09.210 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.210 "is_configured": true, 00:11:09.210 "data_offset": 2048, 00:11:09.210 "data_size": 63488 00:11:09.210 } 00:11:09.210 ] 00:11:09.210 } 00:11:09.210 } 00:11:09.210 }' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.210 pt2 00:11:09.210 pt3 00:11:09.210 pt4' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.210 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.470 [2024-12-12 09:24:43.381772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 97078094-1380-4623-9966-a2c29edce945 '!=' 97078094-1380-4623-9966-a2c29edce945 ']' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73766 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73766 ']' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73766 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.470 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73766 00:11:09.471 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.471 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.471 killing process with pid 73766 00:11:09.471 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73766' 00:11:09.471 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73766 00:11:09.471 [2024-12-12 09:24:43.462210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.471 [2024-12-12 09:24:43.462299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.471 09:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73766 00:11:09.471 [2024-12-12 09:24:43.462385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.471 [2024-12-12 09:24:43.462396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:10.040 [2024-12-12 09:24:43.872556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.419 09:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:11.419 00:11:11.419 real 0m5.485s 00:11:11.419 user 0m7.655s 00:11:11.419 sys 0m1.081s 00:11:11.419 09:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.419 09:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.419 ************************************ 00:11:11.419 END TEST raid_superblock_test 
00:11:11.419 ************************************ 00:11:11.419 09:24:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:11.419 09:24:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.419 09:24:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.419 09:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.419 ************************************ 00:11:11.419 START TEST raid_read_error_test 00:11:11.419 ************************************ 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jWLKcaJppo 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74025 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74025 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74025 ']' 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.419 09:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.419 [2024-12-12 09:24:45.221628] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:11.419 [2024-12-12 09:24:45.221769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74025 ] 00:11:11.419 [2024-12-12 09:24:45.399008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.678 [2024-12-12 09:24:45.538664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.938 [2024-12-12 09:24:45.773441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.938 [2024-12-12 09:24:45.773496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.197 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.197 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.197 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 BaseBdev1_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 true 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 [2024-12-12 09:24:46.092710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.198 [2024-12-12 09:24:46.092781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.198 [2024-12-12 09:24:46.092818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.198 [2024-12-12 09:24:46.092840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.198 [2024-12-12 09:24:46.095234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.198 [2024-12-12 09:24:46.095272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.198 BaseBdev1 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 BaseBdev2_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 true 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 [2024-12-12 09:24:46.166432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.198 [2024-12-12 09:24:46.166503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.198 [2024-12-12 09:24:46.166519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.198 [2024-12-12 09:24:46.166531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.198 [2024-12-12 09:24:46.168900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.198 [2024-12-12 09:24:46.168941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.198 BaseBdev2 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.198 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 BaseBdev3_malloc 00:11:12.457 09:24:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 true 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 [2024-12-12 09:24:46.267371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.457 [2024-12-12 09:24:46.267426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.457 [2024-12-12 09:24:46.267460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.457 [2024-12-12 09:24:46.267472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.457 [2024-12-12 09:24:46.269855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.457 [2024-12-12 09:24:46.269894] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:12.457 BaseBdev3 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 BaseBdev4_malloc 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 true 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 [2024-12-12 09:24:46.340077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:12.457 [2024-12-12 09:24:46.340133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.457 [2024-12-12 09:24:46.340151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:12.457 [2024-12-12 09:24:46.340162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.457 [2024-12-12 09:24:46.342475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.457 [2024-12-12 09:24:46.342514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:12.457 BaseBdev4 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.457 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 [2024-12-12 09:24:46.352135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.457 [2024-12-12 09:24:46.354209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.457 [2024-12-12 09:24:46.354301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.457 [2024-12-12 09:24:46.354360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.457 [2024-12-12 09:24:46.354577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:12.457 [2024-12-12 09:24:46.354595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.457 [2024-12-12 09:24:46.354828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:12.457 [2024-12-12 09:24:46.355030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:12.458 [2024-12-12 09:24:46.355051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:12.458 [2024-12-12 09:24:46.355213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:12.458 09:24:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.458 "name": "raid_bdev1", 00:11:12.458 "uuid": "e9b30d7d-0964-4df2-9b77-82e3e2927ad4", 00:11:12.458 "strip_size_kb": 64, 00:11:12.458 "state": "online", 00:11:12.458 "raid_level": "concat", 00:11:12.458 "superblock": true, 00:11:12.458 "num_base_bdevs": 4, 00:11:12.458 "num_base_bdevs_discovered": 4, 00:11:12.458 "num_base_bdevs_operational": 4, 00:11:12.458 "base_bdevs_list": [ 
00:11:12.458 { 00:11:12.458 "name": "BaseBdev1", 00:11:12.458 "uuid": "46992ae9-97e0-5af7-99bf-df593a93211b", 00:11:12.458 "is_configured": true, 00:11:12.458 "data_offset": 2048, 00:11:12.458 "data_size": 63488 00:11:12.458 }, 00:11:12.458 { 00:11:12.458 "name": "BaseBdev2", 00:11:12.458 "uuid": "ee683497-812a-5480-9bea-b9d11074fce7", 00:11:12.458 "is_configured": true, 00:11:12.458 "data_offset": 2048, 00:11:12.458 "data_size": 63488 00:11:12.458 }, 00:11:12.458 { 00:11:12.458 "name": "BaseBdev3", 00:11:12.458 "uuid": "463f6da2-ef34-5fc2-8395-af35e752bf14", 00:11:12.458 "is_configured": true, 00:11:12.458 "data_offset": 2048, 00:11:12.458 "data_size": 63488 00:11:12.458 }, 00:11:12.458 { 00:11:12.458 "name": "BaseBdev4", 00:11:12.458 "uuid": "0be4b2ec-b3a9-591c-9283-eab69e6e9e0f", 00:11:12.458 "is_configured": true, 00:11:12.458 "data_offset": 2048, 00:11:12.458 "data_size": 63488 00:11:12.458 } 00:11:12.458 ] 00:11:12.458 }' 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.458 09:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.026 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.026 09:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.026 [2024-12-12 09:24:46.884667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.963 09:24:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.963 09:24:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.963 "name": "raid_bdev1", 00:11:13.963 "uuid": "e9b30d7d-0964-4df2-9b77-82e3e2927ad4", 00:11:13.963 "strip_size_kb": 64, 00:11:13.963 "state": "online", 00:11:13.963 "raid_level": "concat", 00:11:13.963 "superblock": true, 00:11:13.963 "num_base_bdevs": 4, 00:11:13.963 "num_base_bdevs_discovered": 4, 00:11:13.963 "num_base_bdevs_operational": 4, 00:11:13.963 "base_bdevs_list": [ 00:11:13.963 { 00:11:13.963 "name": "BaseBdev1", 00:11:13.963 "uuid": "46992ae9-97e0-5af7-99bf-df593a93211b", 00:11:13.963 "is_configured": true, 00:11:13.963 "data_offset": 2048, 00:11:13.963 "data_size": 63488 00:11:13.963 }, 00:11:13.963 { 00:11:13.963 "name": "BaseBdev2", 00:11:13.963 "uuid": "ee683497-812a-5480-9bea-b9d11074fce7", 00:11:13.963 "is_configured": true, 00:11:13.963 "data_offset": 2048, 00:11:13.963 "data_size": 63488 00:11:13.963 }, 00:11:13.963 { 00:11:13.963 "name": "BaseBdev3", 00:11:13.963 "uuid": "463f6da2-ef34-5fc2-8395-af35e752bf14", 00:11:13.963 "is_configured": true, 00:11:13.963 "data_offset": 2048, 00:11:13.963 "data_size": 63488 00:11:13.963 }, 00:11:13.963 { 00:11:13.963 "name": "BaseBdev4", 00:11:13.963 "uuid": "0be4b2ec-b3a9-591c-9283-eab69e6e9e0f", 00:11:13.963 "is_configured": true, 00:11:13.963 "data_offset": 2048, 00:11:13.963 "data_size": 63488 00:11:13.963 } 00:11:13.963 ] 00:11:13.963 }' 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.963 09:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.539 09:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.539 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.539 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.539 [2024-12-12 09:24:48.294595] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.540 [2024-12-12 09:24:48.294640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.540 [2024-12-12 09:24:48.297426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.540 [2024-12-12 09:24:48.297500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.540 [2024-12-12 09:24:48.297551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.540 [2024-12-12 09:24:48.297565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:14.540 { 00:11:14.540 "results": [ 00:11:14.540 { 00:11:14.540 "job": "raid_bdev1", 00:11:14.540 "core_mask": "0x1", 00:11:14.540 "workload": "randrw", 00:11:14.540 "percentage": 50, 00:11:14.540 "status": "finished", 00:11:14.540 "queue_depth": 1, 00:11:14.540 "io_size": 131072, 00:11:14.540 "runtime": 1.410614, 00:11:14.540 "iops": 13500.504035831205, 00:11:14.540 "mibps": 1687.5630044789007, 00:11:14.540 "io_failed": 1, 00:11:14.540 "io_timeout": 0, 00:11:14.540 "avg_latency_us": 104.23170716104468, 00:11:14.540 "min_latency_us": 26.047161572052403, 00:11:14.540 "max_latency_us": 1352.216593886463 00:11:14.540 } 00:11:14.540 ], 00:11:14.540 "core_count": 1 00:11:14.540 } 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74025 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74025 ']' 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74025 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74025 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.540 killing process with pid 74025 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74025' 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74025 00:11:14.540 [2024-12-12 09:24:48.346127] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.540 09:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74025 00:11:14.812 [2024-12-12 09:24:48.694887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.192 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.192 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jWLKcaJppo 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:16.193 00:11:16.193 real 0m4.845s 00:11:16.193 user 0m5.560s 00:11:16.193 sys 0m0.718s 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:16.193 09:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.193 ************************************ 00:11:16.193 END TEST raid_read_error_test 00:11:16.193 ************************************ 00:11:16.193 09:24:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:16.193 09:24:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.193 09:24:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.193 09:24:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.193 ************************************ 00:11:16.193 START TEST raid_write_error_test 00:11:16.193 ************************************ 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7YFAg9fnry 00:11:16.193 09:24:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74176 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74176 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74176 ']' 00:11:16.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.193 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.193 [2024-12-12 09:24:50.139306] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:16.193 [2024-12-12 09:24:50.139421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74176 ] 00:11:16.453 [2024-12-12 09:24:50.313292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.453 [2024-12-12 09:24:50.444175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.713 [2024-12-12 09:24:50.674699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.713 [2024-12-12 09:24:50.674768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.973 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.973 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:16.973 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:16.973 09:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.973 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.973 09:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 BaseBdev1_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 true 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 [2024-12-12 09:24:51.037731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.233 [2024-12-12 09:24:51.037882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.233 [2024-12-12 09:24:51.037925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.233 [2024-12-12 09:24:51.037966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.233 [2024-12-12 09:24:51.040383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.233 [2024-12-12 09:24:51.040465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.233 BaseBdev1 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 BaseBdev2_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.233 09:24:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 true 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 [2024-12-12 09:24:51.110054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.233 [2024-12-12 09:24:51.110118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.233 [2024-12-12 09:24:51.110135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.233 [2024-12-12 09:24:51.110147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.233 [2024-12-12 09:24:51.112606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.233 [2024-12-12 09:24:51.112697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.233 BaseBdev2 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:17.233 BaseBdev3_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 true 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.233 [2024-12-12 09:24:51.212880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.233 [2024-12-12 09:24:51.212937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.233 [2024-12-12 09:24:51.212983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.233 [2024-12-12 09:24:51.212995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.233 [2024-12-12 09:24:51.215450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.233 [2024-12-12 09:24:51.215525] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:17.233 BaseBdev3 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.233 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.492 BaseBdev4_malloc 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.492 true 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.492 [2024-12-12 09:24:51.285407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:17.492 [2024-12-12 09:24:51.285466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.492 [2024-12-12 09:24:51.285498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.492 [2024-12-12 09:24:51.285511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.492 [2024-12-12 09:24:51.287874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.492 [2024-12-12 09:24:51.288011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:17.492 BaseBdev4 
00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.492 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.492 [2024-12-12 09:24:51.297454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.492 [2024-12-12 09:24:51.299520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.492 [2024-12-12 09:24:51.299665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.493 [2024-12-12 09:24:51.299732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.493 [2024-12-12 09:24:51.299970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:17.493 [2024-12-12 09:24:51.299986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.493 [2024-12-12 09:24:51.300219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:17.493 [2024-12-12 09:24:51.300392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:17.493 [2024-12-12 09:24:51.300406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:17.493 [2024-12-12 09:24:51.300562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.493 "name": "raid_bdev1", 00:11:17.493 "uuid": "83934f87-ff30-4a5e-a6fe-42ff1fb9f50f", 00:11:17.493 "strip_size_kb": 64, 00:11:17.493 "state": "online", 00:11:17.493 "raid_level": "concat", 00:11:17.493 "superblock": true, 00:11:17.493 "num_base_bdevs": 4, 00:11:17.493 "num_base_bdevs_discovered": 4, 00:11:17.493 
"num_base_bdevs_operational": 4, 00:11:17.493 "base_bdevs_list": [ 00:11:17.493 { 00:11:17.493 "name": "BaseBdev1", 00:11:17.493 "uuid": "6a408aab-7a97-5e69-a63d-7f1bf4151c21", 00:11:17.493 "is_configured": true, 00:11:17.493 "data_offset": 2048, 00:11:17.493 "data_size": 63488 00:11:17.493 }, 00:11:17.493 { 00:11:17.493 "name": "BaseBdev2", 00:11:17.493 "uuid": "2cf2e3e1-a18f-5716-a7f5-78abafaad721", 00:11:17.493 "is_configured": true, 00:11:17.493 "data_offset": 2048, 00:11:17.493 "data_size": 63488 00:11:17.493 }, 00:11:17.493 { 00:11:17.493 "name": "BaseBdev3", 00:11:17.493 "uuid": "00ad16a5-06cd-5301-b042-395fc8610b02", 00:11:17.493 "is_configured": true, 00:11:17.493 "data_offset": 2048, 00:11:17.493 "data_size": 63488 00:11:17.493 }, 00:11:17.493 { 00:11:17.493 "name": "BaseBdev4", 00:11:17.493 "uuid": "2d46ed4b-8a41-513f-a283-504c1237357d", 00:11:17.493 "is_configured": true, 00:11:17.493 "data_offset": 2048, 00:11:17.493 "data_size": 63488 00:11:17.493 } 00:11:17.493 ] 00:11:17.493 }' 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.493 09:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.752 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:17.752 09:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:18.011 [2024-12-12 09:24:51.834029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:18.949 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.950 09:24:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.950 "name": "raid_bdev1", 00:11:18.950 "uuid": "83934f87-ff30-4a5e-a6fe-42ff1fb9f50f", 00:11:18.950 "strip_size_kb": 64, 00:11:18.950 "state": "online", 00:11:18.950 "raid_level": "concat", 00:11:18.950 "superblock": true, 00:11:18.950 "num_base_bdevs": 4, 00:11:18.950 "num_base_bdevs_discovered": 4, 00:11:18.950 "num_base_bdevs_operational": 4, 00:11:18.950 "base_bdevs_list": [ 00:11:18.950 { 00:11:18.950 "name": "BaseBdev1", 00:11:18.950 "uuid": "6a408aab-7a97-5e69-a63d-7f1bf4151c21", 00:11:18.950 "is_configured": true, 00:11:18.950 "data_offset": 2048, 00:11:18.950 "data_size": 63488 00:11:18.950 }, 00:11:18.950 { 00:11:18.950 "name": "BaseBdev2", 00:11:18.950 "uuid": "2cf2e3e1-a18f-5716-a7f5-78abafaad721", 00:11:18.950 "is_configured": true, 00:11:18.950 "data_offset": 2048, 00:11:18.950 "data_size": 63488 00:11:18.950 }, 00:11:18.950 { 00:11:18.950 "name": "BaseBdev3", 00:11:18.950 "uuid": "00ad16a5-06cd-5301-b042-395fc8610b02", 00:11:18.950 "is_configured": true, 00:11:18.950 "data_offset": 2048, 00:11:18.950 "data_size": 63488 00:11:18.950 }, 00:11:18.950 { 00:11:18.950 "name": "BaseBdev4", 00:11:18.950 "uuid": "2d46ed4b-8a41-513f-a283-504c1237357d", 00:11:18.950 "is_configured": true, 00:11:18.950 "data_offset": 2048, 00:11:18.950 "data_size": 63488 00:11:18.950 } 00:11:18.950 ] 00:11:18.950 }' 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.950 09:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.209 09:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.210 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.210 09:24:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.210 [2024-12-12 09:24:53.230459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.210 [2024-12-12 09:24:53.230566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.469 [2024-12-12 09:24:53.233399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.469 [2024-12-12 09:24:53.233529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.469 [2024-12-12 09:24:53.233599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.469 [2024-12-12 09:24:53.233651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.469 { 00:11:19.469 "results": [ 00:11:19.469 { 00:11:19.469 "job": "raid_bdev1", 00:11:19.469 "core_mask": "0x1", 00:11:19.469 "workload": "randrw", 00:11:19.469 "percentage": 50, 00:11:19.469 "status": "finished", 00:11:19.469 "queue_depth": 1, 00:11:19.469 "io_size": 131072, 00:11:19.469 "runtime": 1.397072, 00:11:19.469 "iops": 13714.39696737176, 00:11:19.469 "mibps": 1714.29962092147, 00:11:19.469 "io_failed": 1, 00:11:19.469 "io_timeout": 0, 00:11:19.469 "avg_latency_us": 102.58714287049135, 00:11:19.469 "min_latency_us": 25.2646288209607, 00:11:19.469 "max_latency_us": 1309.2890829694322 00:11:19.469 } 00:11:19.469 ], 00:11:19.469 "core_count": 1 00:11:19.469 } 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74176 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74176 ']' 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74176 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74176 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74176' 00:11:19.469 killing process with pid 74176 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74176 00:11:19.469 [2024-12-12 09:24:53.276987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.469 09:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74176 00:11:19.729 [2024-12-12 09:24:53.617838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.109 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7YFAg9fnry 00:11:21.109 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.110 ************************************ 00:11:21.110 END TEST raid_write_error_test 00:11:21.110 ************************************ 00:11:21.110 09:24:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:21.110 00:11:21.110 real 0m4.846s 00:11:21.110 user 0m5.581s 00:11:21.110 sys 0m0.730s 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.110 09:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.110 09:24:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:21.110 09:24:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:21.110 09:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.110 09:24:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.110 09:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.110 ************************************ 00:11:21.110 START TEST raid_state_function_test 00:11:21.110 ************************************ 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:21.110 09:24:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74319 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74319' 00:11:21.110 Process raid pid: 74319 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74319 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74319 ']' 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.110 09:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.110 [2024-12-12 09:24:55.048139] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:21.110 [2024-12-12 09:24:55.048317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.369 [2024-12-12 09:24:55.226637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.369 [2024-12-12 09:24:55.361511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.628 [2024-12-12 09:24:55.594010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.628 [2024-12-12 09:24:55.594137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.888 [2024-12-12 09:24:55.873951] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.888 [2024-12-12 09:24:55.874024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.888 [2024-12-12 09:24:55.874034] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.888 [2024-12-12 09:24:55.874059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.888 [2024-12-12 09:24:55.874070] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:21.888 [2024-12-12 09:24:55.874080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.888 [2024-12-12 09:24:55.874086] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.888 [2024-12-12 09:24:55.874094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.888 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.148 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.148 "name": "Existed_Raid", 00:11:22.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.148 "strip_size_kb": 0, 00:11:22.148 "state": "configuring", 00:11:22.148 "raid_level": "raid1", 00:11:22.148 "superblock": false, 00:11:22.148 "num_base_bdevs": 4, 00:11:22.148 "num_base_bdevs_discovered": 0, 00:11:22.148 "num_base_bdevs_operational": 4, 00:11:22.148 "base_bdevs_list": [ 00:11:22.148 { 00:11:22.148 "name": "BaseBdev1", 00:11:22.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.148 "is_configured": false, 00:11:22.148 "data_offset": 0, 00:11:22.148 "data_size": 0 00:11:22.148 }, 00:11:22.148 { 00:11:22.148 "name": "BaseBdev2", 00:11:22.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.148 "is_configured": false, 00:11:22.148 "data_offset": 0, 00:11:22.148 "data_size": 0 00:11:22.148 }, 00:11:22.148 { 00:11:22.148 "name": "BaseBdev3", 00:11:22.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.148 "is_configured": false, 00:11:22.148 "data_offset": 0, 00:11:22.148 "data_size": 0 00:11:22.148 }, 00:11:22.148 { 00:11:22.148 "name": "BaseBdev4", 00:11:22.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.148 "is_configured": false, 00:11:22.148 "data_offset": 0, 00:11:22.148 "data_size": 0 00:11:22.148 } 00:11:22.148 ] 00:11:22.148 }' 00:11:22.148 09:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.148 09:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.408 [2024-12-12 09:24:56.317130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.408 [2024-12-12 09:24:56.317229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.408 [2024-12-12 09:24:56.329104] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.408 [2024-12-12 09:24:56.329146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.408 [2024-12-12 09:24:56.329154] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.408 [2024-12-12 09:24:56.329179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.408 [2024-12-12 09:24:56.329185] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.408 [2024-12-12 09:24:56.329194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.408 [2024-12-12 09:24:56.329200] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.408 [2024-12-12 09:24:56.329209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.408 [2024-12-12 09:24:56.383364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.408 BaseBdev1 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.408 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.409 [ 00:11:22.409 { 00:11:22.409 "name": "BaseBdev1", 00:11:22.409 "aliases": [ 00:11:22.409 "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda" 00:11:22.409 ], 00:11:22.409 "product_name": "Malloc disk", 00:11:22.409 "block_size": 512, 00:11:22.409 "num_blocks": 65536, 00:11:22.409 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:22.409 "assigned_rate_limits": { 00:11:22.409 "rw_ios_per_sec": 0, 00:11:22.409 "rw_mbytes_per_sec": 0, 00:11:22.409 "r_mbytes_per_sec": 0, 00:11:22.409 "w_mbytes_per_sec": 0 00:11:22.409 }, 00:11:22.409 "claimed": true, 00:11:22.409 "claim_type": "exclusive_write", 00:11:22.409 "zoned": false, 00:11:22.409 "supported_io_types": { 00:11:22.409 "read": true, 00:11:22.409 "write": true, 00:11:22.409 "unmap": true, 00:11:22.409 "flush": true, 00:11:22.409 "reset": true, 00:11:22.409 "nvme_admin": false, 00:11:22.409 "nvme_io": false, 00:11:22.409 "nvme_io_md": false, 00:11:22.409 "write_zeroes": true, 00:11:22.409 "zcopy": true, 00:11:22.409 "get_zone_info": false, 00:11:22.409 "zone_management": false, 00:11:22.409 "zone_append": false, 00:11:22.409 "compare": false, 00:11:22.409 "compare_and_write": false, 00:11:22.409 "abort": true, 00:11:22.409 "seek_hole": false, 00:11:22.409 "seek_data": false, 00:11:22.409 "copy": true, 00:11:22.409 "nvme_iov_md": false 00:11:22.409 }, 00:11:22.409 "memory_domains": [ 00:11:22.409 { 00:11:22.409 "dma_device_id": "system", 00:11:22.409 "dma_device_type": 1 00:11:22.409 }, 00:11:22.409 { 00:11:22.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.409 "dma_device_type": 2 00:11:22.409 } 00:11:22.409 ], 00:11:22.409 "driver_specific": {} 00:11:22.409 } 00:11:22.409 ] 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.409 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.669 "name": "Existed_Raid", 00:11:22.669 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:22.669 "strip_size_kb": 0, 00:11:22.669 "state": "configuring", 00:11:22.669 "raid_level": "raid1", 00:11:22.669 "superblock": false, 00:11:22.669 "num_base_bdevs": 4, 00:11:22.669 "num_base_bdevs_discovered": 1, 00:11:22.669 "num_base_bdevs_operational": 4, 00:11:22.669 "base_bdevs_list": [ 00:11:22.669 { 00:11:22.669 "name": "BaseBdev1", 00:11:22.669 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:22.669 "is_configured": true, 00:11:22.669 "data_offset": 0, 00:11:22.669 "data_size": 65536 00:11:22.669 }, 00:11:22.669 { 00:11:22.669 "name": "BaseBdev2", 00:11:22.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.669 "is_configured": false, 00:11:22.669 "data_offset": 0, 00:11:22.669 "data_size": 0 00:11:22.669 }, 00:11:22.669 { 00:11:22.669 "name": "BaseBdev3", 00:11:22.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.669 "is_configured": false, 00:11:22.669 "data_offset": 0, 00:11:22.669 "data_size": 0 00:11:22.669 }, 00:11:22.669 { 00:11:22.669 "name": "BaseBdev4", 00:11:22.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.669 "is_configured": false, 00:11:22.669 "data_offset": 0, 00:11:22.669 "data_size": 0 00:11:22.669 } 00:11:22.669 ] 00:11:22.669 }' 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.669 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.929 [2024-12-12 09:24:56.862626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.929 [2024-12-12 09:24:56.862693] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.929 [2024-12-12 09:24:56.874656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.929 [2024-12-12 09:24:56.876804] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.929 [2024-12-12 09:24:56.876903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.929 [2024-12-12 09:24:56.876933] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.929 [2024-12-12 09:24:56.876957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.929 [2024-12-12 09:24:56.876986] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.929 [2024-12-12 09:24:56.877008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.929 09:24:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.929 "name": "Existed_Raid", 00:11:22.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.929 "strip_size_kb": 0, 00:11:22.929 "state": "configuring", 00:11:22.929 "raid_level": "raid1", 00:11:22.929 "superblock": false, 00:11:22.929 "num_base_bdevs": 4, 00:11:22.929 "num_base_bdevs_discovered": 1, 00:11:22.929 
"num_base_bdevs_operational": 4, 00:11:22.929 "base_bdevs_list": [ 00:11:22.929 { 00:11:22.929 "name": "BaseBdev1", 00:11:22.929 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:22.929 "is_configured": true, 00:11:22.929 "data_offset": 0, 00:11:22.929 "data_size": 65536 00:11:22.929 }, 00:11:22.929 { 00:11:22.929 "name": "BaseBdev2", 00:11:22.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.929 "is_configured": false, 00:11:22.929 "data_offset": 0, 00:11:22.929 "data_size": 0 00:11:22.929 }, 00:11:22.929 { 00:11:22.929 "name": "BaseBdev3", 00:11:22.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.929 "is_configured": false, 00:11:22.929 "data_offset": 0, 00:11:22.929 "data_size": 0 00:11:22.929 }, 00:11:22.929 { 00:11:22.929 "name": "BaseBdev4", 00:11:22.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.929 "is_configured": false, 00:11:22.929 "data_offset": 0, 00:11:22.929 "data_size": 0 00:11:22.929 } 00:11:22.929 ] 00:11:22.929 }' 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.929 09:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 [2024-12-12 09:24:57.353461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.501 BaseBdev2 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 [ 00:11:23.501 { 00:11:23.501 "name": "BaseBdev2", 00:11:23.501 "aliases": [ 00:11:23.501 "69ce38a3-ed44-493d-85ed-48b6d884c591" 00:11:23.501 ], 00:11:23.501 "product_name": "Malloc disk", 00:11:23.501 "block_size": 512, 00:11:23.501 "num_blocks": 65536, 00:11:23.501 "uuid": "69ce38a3-ed44-493d-85ed-48b6d884c591", 00:11:23.501 "assigned_rate_limits": { 00:11:23.501 "rw_ios_per_sec": 0, 00:11:23.501 "rw_mbytes_per_sec": 0, 00:11:23.501 "r_mbytes_per_sec": 0, 00:11:23.501 "w_mbytes_per_sec": 0 00:11:23.501 }, 00:11:23.501 "claimed": true, 00:11:23.501 "claim_type": "exclusive_write", 00:11:23.501 "zoned": false, 00:11:23.501 "supported_io_types": { 00:11:23.501 "read": true, 00:11:23.501 "write": true, 00:11:23.501 
"unmap": true, 00:11:23.501 "flush": true, 00:11:23.501 "reset": true, 00:11:23.501 "nvme_admin": false, 00:11:23.501 "nvme_io": false, 00:11:23.501 "nvme_io_md": false, 00:11:23.501 "write_zeroes": true, 00:11:23.501 "zcopy": true, 00:11:23.501 "get_zone_info": false, 00:11:23.501 "zone_management": false, 00:11:23.501 "zone_append": false, 00:11:23.501 "compare": false, 00:11:23.501 "compare_and_write": false, 00:11:23.501 "abort": true, 00:11:23.501 "seek_hole": false, 00:11:23.501 "seek_data": false, 00:11:23.501 "copy": true, 00:11:23.501 "nvme_iov_md": false 00:11:23.501 }, 00:11:23.501 "memory_domains": [ 00:11:23.501 { 00:11:23.501 "dma_device_id": "system", 00:11:23.501 "dma_device_type": 1 00:11:23.501 }, 00:11:23.501 { 00:11:23.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.501 "dma_device_type": 2 00:11:23.501 } 00:11:23.501 ], 00:11:23.501 "driver_specific": {} 00:11:23.501 } 00:11:23.501 ] 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.501 09:24:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.501 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.501 "name": "Existed_Raid", 00:11:23.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.502 "strip_size_kb": 0, 00:11:23.502 "state": "configuring", 00:11:23.502 "raid_level": "raid1", 00:11:23.502 "superblock": false, 00:11:23.502 "num_base_bdevs": 4, 00:11:23.502 "num_base_bdevs_discovered": 2, 00:11:23.502 "num_base_bdevs_operational": 4, 00:11:23.502 "base_bdevs_list": [ 00:11:23.502 { 00:11:23.502 "name": "BaseBdev1", 00:11:23.502 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:23.502 "is_configured": true, 00:11:23.502 "data_offset": 0, 00:11:23.502 "data_size": 65536 00:11:23.502 }, 00:11:23.502 { 00:11:23.502 "name": "BaseBdev2", 00:11:23.502 "uuid": "69ce38a3-ed44-493d-85ed-48b6d884c591", 00:11:23.502 "is_configured": true, 00:11:23.502 
"data_offset": 0, 00:11:23.502 "data_size": 65536 00:11:23.502 }, 00:11:23.502 { 00:11:23.502 "name": "BaseBdev3", 00:11:23.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.502 "is_configured": false, 00:11:23.502 "data_offset": 0, 00:11:23.502 "data_size": 0 00:11:23.502 }, 00:11:23.502 { 00:11:23.502 "name": "BaseBdev4", 00:11:23.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.502 "is_configured": false, 00:11:23.502 "data_offset": 0, 00:11:23.502 "data_size": 0 00:11:23.502 } 00:11:23.502 ] 00:11:23.502 }' 00:11:23.502 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.502 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.088 [2024-12-12 09:24:57.888374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.088 BaseBdev3 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.088 [ 00:11:24.088 { 00:11:24.088 "name": "BaseBdev3", 00:11:24.088 "aliases": [ 00:11:24.088 "d75765dc-53b4-4d12-bd7d-dd136428c32f" 00:11:24.088 ], 00:11:24.088 "product_name": "Malloc disk", 00:11:24.088 "block_size": 512, 00:11:24.088 "num_blocks": 65536, 00:11:24.088 "uuid": "d75765dc-53b4-4d12-bd7d-dd136428c32f", 00:11:24.088 "assigned_rate_limits": { 00:11:24.088 "rw_ios_per_sec": 0, 00:11:24.088 "rw_mbytes_per_sec": 0, 00:11:24.088 "r_mbytes_per_sec": 0, 00:11:24.088 "w_mbytes_per_sec": 0 00:11:24.088 }, 00:11:24.088 "claimed": true, 00:11:24.088 "claim_type": "exclusive_write", 00:11:24.088 "zoned": false, 00:11:24.088 "supported_io_types": { 00:11:24.088 "read": true, 00:11:24.088 "write": true, 00:11:24.088 "unmap": true, 00:11:24.088 "flush": true, 00:11:24.088 "reset": true, 00:11:24.088 "nvme_admin": false, 00:11:24.088 "nvme_io": false, 00:11:24.088 "nvme_io_md": false, 00:11:24.088 "write_zeroes": true, 00:11:24.088 "zcopy": true, 00:11:24.088 "get_zone_info": false, 00:11:24.088 "zone_management": false, 00:11:24.088 "zone_append": false, 00:11:24.088 "compare": false, 00:11:24.088 "compare_and_write": false, 00:11:24.088 "abort": true, 
00:11:24.088 "seek_hole": false, 00:11:24.088 "seek_data": false, 00:11:24.088 "copy": true, 00:11:24.088 "nvme_iov_md": false 00:11:24.088 }, 00:11:24.088 "memory_domains": [ 00:11:24.088 { 00:11:24.088 "dma_device_id": "system", 00:11:24.088 "dma_device_type": 1 00:11:24.088 }, 00:11:24.088 { 00:11:24.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.088 "dma_device_type": 2 00:11:24.088 } 00:11:24.088 ], 00:11:24.088 "driver_specific": {} 00:11:24.088 } 00:11:24.088 ] 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.088 09:24:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.088 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.088 "name": "Existed_Raid", 00:11:24.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.089 "strip_size_kb": 0, 00:11:24.089 "state": "configuring", 00:11:24.089 "raid_level": "raid1", 00:11:24.089 "superblock": false, 00:11:24.089 "num_base_bdevs": 4, 00:11:24.089 "num_base_bdevs_discovered": 3, 00:11:24.089 "num_base_bdevs_operational": 4, 00:11:24.089 "base_bdevs_list": [ 00:11:24.089 { 00:11:24.089 "name": "BaseBdev1", 00:11:24.089 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:24.089 "is_configured": true, 00:11:24.089 "data_offset": 0, 00:11:24.089 "data_size": 65536 00:11:24.089 }, 00:11:24.089 { 00:11:24.089 "name": "BaseBdev2", 00:11:24.089 "uuid": "69ce38a3-ed44-493d-85ed-48b6d884c591", 00:11:24.089 "is_configured": true, 00:11:24.089 "data_offset": 0, 00:11:24.089 "data_size": 65536 00:11:24.089 }, 00:11:24.089 { 00:11:24.089 "name": "BaseBdev3", 00:11:24.089 "uuid": "d75765dc-53b4-4d12-bd7d-dd136428c32f", 00:11:24.089 "is_configured": true, 00:11:24.089 "data_offset": 0, 00:11:24.089 "data_size": 65536 00:11:24.089 }, 00:11:24.089 { 00:11:24.089 "name": "BaseBdev4", 00:11:24.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.089 "is_configured": false, 00:11:24.089 "data_offset": 
0, 00:11:24.089 "data_size": 0 00:11:24.089 } 00:11:24.089 ] 00:11:24.089 }' 00:11:24.089 09:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.089 09:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.359 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.359 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.359 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.619 [2024-12-12 09:24:58.419137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.619 [2024-12-12 09:24:58.419279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.619 [2024-12-12 09:24:58.419306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:24.619 [2024-12-12 09:24:58.419681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.619 [2024-12-12 09:24:58.419927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.619 [2024-12-12 09:24:58.419946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.619 [2024-12-12 09:24:58.420261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.619 BaseBdev4 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.619 [ 00:11:24.619 { 00:11:24.619 "name": "BaseBdev4", 00:11:24.619 "aliases": [ 00:11:24.619 "e3075c4f-42ee-4b17-85af-36e9e4601139" 00:11:24.619 ], 00:11:24.619 "product_name": "Malloc disk", 00:11:24.619 "block_size": 512, 00:11:24.619 "num_blocks": 65536, 00:11:24.619 "uuid": "e3075c4f-42ee-4b17-85af-36e9e4601139", 00:11:24.619 "assigned_rate_limits": { 00:11:24.619 "rw_ios_per_sec": 0, 00:11:24.619 "rw_mbytes_per_sec": 0, 00:11:24.619 "r_mbytes_per_sec": 0, 00:11:24.619 "w_mbytes_per_sec": 0 00:11:24.619 }, 00:11:24.619 "claimed": true, 00:11:24.619 "claim_type": "exclusive_write", 00:11:24.619 "zoned": false, 00:11:24.619 "supported_io_types": { 00:11:24.619 "read": true, 00:11:24.619 "write": true, 00:11:24.619 "unmap": true, 00:11:24.619 "flush": true, 00:11:24.619 "reset": true, 00:11:24.619 "nvme_admin": false, 00:11:24.619 "nvme_io": 
false, 00:11:24.619 "nvme_io_md": false, 00:11:24.619 "write_zeroes": true, 00:11:24.619 "zcopy": true, 00:11:24.619 "get_zone_info": false, 00:11:24.619 "zone_management": false, 00:11:24.619 "zone_append": false, 00:11:24.619 "compare": false, 00:11:24.619 "compare_and_write": false, 00:11:24.619 "abort": true, 00:11:24.619 "seek_hole": false, 00:11:24.619 "seek_data": false, 00:11:24.619 "copy": true, 00:11:24.619 "nvme_iov_md": false 00:11:24.619 }, 00:11:24.619 "memory_domains": [ 00:11:24.619 { 00:11:24.619 "dma_device_id": "system", 00:11:24.619 "dma_device_type": 1 00:11:24.619 }, 00:11:24.619 { 00:11:24.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.619 "dma_device_type": 2 00:11:24.619 } 00:11:24.619 ], 00:11:24.619 "driver_specific": {} 00:11:24.619 } 00:11:24.619 ] 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.619 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.620 "name": "Existed_Raid", 00:11:24.620 "uuid": "ec25a6a5-1d89-4bca-829a-0fdd3b17af60", 00:11:24.620 "strip_size_kb": 0, 00:11:24.620 "state": "online", 00:11:24.620 "raid_level": "raid1", 00:11:24.620 "superblock": false, 00:11:24.620 "num_base_bdevs": 4, 00:11:24.620 "num_base_bdevs_discovered": 4, 00:11:24.620 "num_base_bdevs_operational": 4, 00:11:24.620 "base_bdevs_list": [ 00:11:24.620 { 00:11:24.620 "name": "BaseBdev1", 00:11:24.620 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:24.620 "is_configured": true, 00:11:24.620 "data_offset": 0, 00:11:24.620 "data_size": 65536 00:11:24.620 }, 00:11:24.620 { 00:11:24.620 "name": "BaseBdev2", 00:11:24.620 "uuid": "69ce38a3-ed44-493d-85ed-48b6d884c591", 00:11:24.620 "is_configured": true, 00:11:24.620 "data_offset": 0, 00:11:24.620 "data_size": 65536 00:11:24.620 }, 00:11:24.620 { 00:11:24.620 "name": "BaseBdev3", 00:11:24.620 "uuid": "d75765dc-53b4-4d12-bd7d-dd136428c32f", 
00:11:24.620 "is_configured": true, 00:11:24.620 "data_offset": 0, 00:11:24.620 "data_size": 65536 00:11:24.620 }, 00:11:24.620 { 00:11:24.620 "name": "BaseBdev4", 00:11:24.620 "uuid": "e3075c4f-42ee-4b17-85af-36e9e4601139", 00:11:24.620 "is_configured": true, 00:11:24.620 "data_offset": 0, 00:11:24.620 "data_size": 65536 00:11:24.620 } 00:11:24.620 ] 00:11:24.620 }' 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.620 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.880 [2024-12-12 09:24:58.842741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.880 "name": "Existed_Raid", 00:11:24.880 "aliases": [ 00:11:24.880 "ec25a6a5-1d89-4bca-829a-0fdd3b17af60" 00:11:24.880 ], 00:11:24.880 "product_name": "Raid Volume", 00:11:24.880 "block_size": 512, 00:11:24.880 "num_blocks": 65536, 00:11:24.880 "uuid": "ec25a6a5-1d89-4bca-829a-0fdd3b17af60", 00:11:24.880 "assigned_rate_limits": { 00:11:24.880 "rw_ios_per_sec": 0, 00:11:24.880 "rw_mbytes_per_sec": 0, 00:11:24.880 "r_mbytes_per_sec": 0, 00:11:24.880 "w_mbytes_per_sec": 0 00:11:24.880 }, 00:11:24.880 "claimed": false, 00:11:24.880 "zoned": false, 00:11:24.880 "supported_io_types": { 00:11:24.880 "read": true, 00:11:24.880 "write": true, 00:11:24.880 "unmap": false, 00:11:24.880 "flush": false, 00:11:24.880 "reset": true, 00:11:24.880 "nvme_admin": false, 00:11:24.880 "nvme_io": false, 00:11:24.880 "nvme_io_md": false, 00:11:24.880 "write_zeroes": true, 00:11:24.880 "zcopy": false, 00:11:24.880 "get_zone_info": false, 00:11:24.880 "zone_management": false, 00:11:24.880 "zone_append": false, 00:11:24.880 "compare": false, 00:11:24.880 "compare_and_write": false, 00:11:24.880 "abort": false, 00:11:24.880 "seek_hole": false, 00:11:24.880 "seek_data": false, 00:11:24.880 "copy": false, 00:11:24.880 "nvme_iov_md": false 00:11:24.880 }, 00:11:24.880 "memory_domains": [ 00:11:24.880 { 00:11:24.880 "dma_device_id": "system", 00:11:24.880 "dma_device_type": 1 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.880 "dma_device_type": 2 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "system", 00:11:24.880 "dma_device_type": 1 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.880 "dma_device_type": 2 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "system", 00:11:24.880 "dma_device_type": 1 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.880 "dma_device_type": 2 
00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "system", 00:11:24.880 "dma_device_type": 1 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.880 "dma_device_type": 2 00:11:24.880 } 00:11:24.880 ], 00:11:24.880 "driver_specific": { 00:11:24.880 "raid": { 00:11:24.880 "uuid": "ec25a6a5-1d89-4bca-829a-0fdd3b17af60", 00:11:24.880 "strip_size_kb": 0, 00:11:24.880 "state": "online", 00:11:24.880 "raid_level": "raid1", 00:11:24.880 "superblock": false, 00:11:24.880 "num_base_bdevs": 4, 00:11:24.880 "num_base_bdevs_discovered": 4, 00:11:24.880 "num_base_bdevs_operational": 4, 00:11:24.880 "base_bdevs_list": [ 00:11:24.880 { 00:11:24.880 "name": "BaseBdev1", 00:11:24.880 "uuid": "f78d8be7-a87d-4d93-b79e-5a7afc6e3cda", 00:11:24.880 "is_configured": true, 00:11:24.880 "data_offset": 0, 00:11:24.880 "data_size": 65536 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "name": "BaseBdev2", 00:11:24.880 "uuid": "69ce38a3-ed44-493d-85ed-48b6d884c591", 00:11:24.880 "is_configured": true, 00:11:24.880 "data_offset": 0, 00:11:24.880 "data_size": 65536 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "name": "BaseBdev3", 00:11:24.880 "uuid": "d75765dc-53b4-4d12-bd7d-dd136428c32f", 00:11:24.880 "is_configured": true, 00:11:24.880 "data_offset": 0, 00:11:24.880 "data_size": 65536 00:11:24.880 }, 00:11:24.880 { 00:11:24.880 "name": "BaseBdev4", 00:11:24.880 "uuid": "e3075c4f-42ee-4b17-85af-36e9e4601139", 00:11:24.880 "is_configured": true, 00:11:24.880 "data_offset": 0, 00:11:24.880 "data_size": 65536 00:11:24.880 } 00:11:24.880 ] 00:11:24.880 } 00:11:24.880 } 00:11:24.880 }' 00:11:24.880 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:25.140 BaseBdev2 00:11:25.140 BaseBdev3 00:11:25.140 BaseBdev4' 00:11:25.140 
09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 09:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.401 [2024-12-12 09:24:59.177913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.401 "name": "Existed_Raid", 00:11:25.401 "uuid": "ec25a6a5-1d89-4bca-829a-0fdd3b17af60", 00:11:25.401 "strip_size_kb": 0, 00:11:25.401 "state": "online", 00:11:25.401 "raid_level": "raid1", 00:11:25.401 "superblock": false, 00:11:25.401 "num_base_bdevs": 4, 00:11:25.401 "num_base_bdevs_discovered": 3, 00:11:25.401 "num_base_bdevs_operational": 3, 00:11:25.401 "base_bdevs_list": [ 00:11:25.401 { 00:11:25.401 "name": null, 00:11:25.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.401 "is_configured": false, 00:11:25.401 "data_offset": 0, 00:11:25.401 "data_size": 65536 00:11:25.401 }, 00:11:25.401 { 00:11:25.401 "name": "BaseBdev2", 00:11:25.401 "uuid": "69ce38a3-ed44-493d-85ed-48b6d884c591", 00:11:25.401 "is_configured": true, 00:11:25.401 "data_offset": 0, 00:11:25.401 "data_size": 65536 00:11:25.401 }, 00:11:25.401 { 00:11:25.401 "name": "BaseBdev3", 00:11:25.401 "uuid": "d75765dc-53b4-4d12-bd7d-dd136428c32f", 00:11:25.401 "is_configured": true, 00:11:25.401 "data_offset": 0, 00:11:25.401 "data_size": 65536 00:11:25.401 }, 00:11:25.401 { 
00:11:25.401 "name": "BaseBdev4", 00:11:25.401 "uuid": "e3075c4f-42ee-4b17-85af-36e9e4601139", 00:11:25.401 "is_configured": true, 00:11:25.401 "data_offset": 0, 00:11:25.401 "data_size": 65536 00:11:25.401 } 00:11:25.401 ] 00:11:25.401 }' 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.401 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.661 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.661 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.661 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.661 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.661 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.661 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.920 [2024-12-12 09:24:59.725449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.920 
09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.920 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.920 [2024-12-12 09:24:59.883193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.181 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.181 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.181 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.181 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.181 09:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.181 09:24:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.181 09:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.181 [2024-12-12 09:25:00.041464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:26.181 [2024-12-12 09:25:00.041655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.181 [2024-12-12 09:25:00.147762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.181 [2024-12-12 09:25:00.147886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.181 [2024-12-12 09:25:00.147931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.181 09:25:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.181 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.182 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.442 BaseBdev2 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.442 09:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.442 [ 00:11:26.442 { 00:11:26.442 "name": "BaseBdev2", 00:11:26.442 "aliases": [ 00:11:26.442 "0bf5bb34-d100-4835-9439-b0d7f534bed5" 00:11:26.442 ], 00:11:26.442 "product_name": "Malloc disk", 00:11:26.442 "block_size": 512, 00:11:26.442 "num_blocks": 65536, 00:11:26.442 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:26.442 "assigned_rate_limits": { 00:11:26.442 "rw_ios_per_sec": 0, 00:11:26.442 "rw_mbytes_per_sec": 0, 00:11:26.442 "r_mbytes_per_sec": 0, 00:11:26.442 "w_mbytes_per_sec": 0 00:11:26.442 }, 00:11:26.442 "claimed": false, 00:11:26.442 "zoned": false, 00:11:26.442 "supported_io_types": { 00:11:26.442 "read": true, 00:11:26.442 "write": true, 00:11:26.442 "unmap": true, 00:11:26.442 "flush": true, 00:11:26.442 "reset": true, 00:11:26.442 "nvme_admin": false, 00:11:26.442 "nvme_io": false, 00:11:26.442 "nvme_io_md": false, 00:11:26.442 "write_zeroes": true, 00:11:26.442 "zcopy": true, 00:11:26.442 "get_zone_info": false, 00:11:26.442 "zone_management": false, 00:11:26.442 "zone_append": false, 00:11:26.442 "compare": false, 00:11:26.442 "compare_and_write": false, 
00:11:26.442 "abort": true, 00:11:26.442 "seek_hole": false, 00:11:26.442 "seek_data": false, 00:11:26.442 "copy": true, 00:11:26.442 "nvme_iov_md": false 00:11:26.442 }, 00:11:26.442 "memory_domains": [ 00:11:26.442 { 00:11:26.442 "dma_device_id": "system", 00:11:26.442 "dma_device_type": 1 00:11:26.442 }, 00:11:26.442 { 00:11:26.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.442 "dma_device_type": 2 00:11:26.442 } 00:11:26.442 ], 00:11:26.442 "driver_specific": {} 00:11:26.442 } 00:11:26.442 ] 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.442 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 BaseBdev3 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.443 09:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 [ 00:11:26.443 { 00:11:26.443 "name": "BaseBdev3", 00:11:26.443 "aliases": [ 00:11:26.443 "3da67f03-05b8-4b79-91e4-55d5e41e83cc" 00:11:26.443 ], 00:11:26.443 "product_name": "Malloc disk", 00:11:26.443 "block_size": 512, 00:11:26.443 "num_blocks": 65536, 00:11:26.443 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:26.443 "assigned_rate_limits": { 00:11:26.443 "rw_ios_per_sec": 0, 00:11:26.443 "rw_mbytes_per_sec": 0, 00:11:26.443 "r_mbytes_per_sec": 0, 00:11:26.443 "w_mbytes_per_sec": 0 00:11:26.443 }, 00:11:26.443 "claimed": false, 00:11:26.443 "zoned": false, 00:11:26.443 "supported_io_types": { 00:11:26.443 "read": true, 00:11:26.443 "write": true, 00:11:26.443 "unmap": true, 00:11:26.443 "flush": true, 00:11:26.443 "reset": true, 00:11:26.443 "nvme_admin": false, 00:11:26.443 "nvme_io": false, 00:11:26.443 "nvme_io_md": false, 00:11:26.443 "write_zeroes": true, 00:11:26.443 "zcopy": true, 00:11:26.443 "get_zone_info": false, 00:11:26.443 "zone_management": false, 00:11:26.443 "zone_append": false, 00:11:26.443 "compare": false, 00:11:26.443 "compare_and_write": false, 
00:11:26.443 "abort": true, 00:11:26.443 "seek_hole": false, 00:11:26.443 "seek_data": false, 00:11:26.443 "copy": true, 00:11:26.443 "nvme_iov_md": false 00:11:26.443 }, 00:11:26.443 "memory_domains": [ 00:11:26.443 { 00:11:26.443 "dma_device_id": "system", 00:11:26.443 "dma_device_type": 1 00:11:26.443 }, 00:11:26.443 { 00:11:26.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.443 "dma_device_type": 2 00:11:26.443 } 00:11:26.443 ], 00:11:26.443 "driver_specific": {} 00:11:26.443 } 00:11:26.443 ] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 BaseBdev4 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.443 09:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 [ 00:11:26.443 { 00:11:26.443 "name": "BaseBdev4", 00:11:26.443 "aliases": [ 00:11:26.443 "f5cf86b5-cc31-408f-8ae7-41f0c968a896" 00:11:26.443 ], 00:11:26.443 "product_name": "Malloc disk", 00:11:26.443 "block_size": 512, 00:11:26.443 "num_blocks": 65536, 00:11:26.443 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:26.443 "assigned_rate_limits": { 00:11:26.443 "rw_ios_per_sec": 0, 00:11:26.443 "rw_mbytes_per_sec": 0, 00:11:26.443 "r_mbytes_per_sec": 0, 00:11:26.443 "w_mbytes_per_sec": 0 00:11:26.443 }, 00:11:26.443 "claimed": false, 00:11:26.443 "zoned": false, 00:11:26.443 "supported_io_types": { 00:11:26.443 "read": true, 00:11:26.443 "write": true, 00:11:26.443 "unmap": true, 00:11:26.443 "flush": true, 00:11:26.443 "reset": true, 00:11:26.443 "nvme_admin": false, 00:11:26.443 "nvme_io": false, 00:11:26.443 "nvme_io_md": false, 00:11:26.443 "write_zeroes": true, 00:11:26.443 "zcopy": true, 00:11:26.443 "get_zone_info": false, 00:11:26.443 "zone_management": false, 00:11:26.443 "zone_append": false, 00:11:26.443 "compare": false, 00:11:26.443 "compare_and_write": false, 
00:11:26.443 "abort": true, 00:11:26.443 "seek_hole": false, 00:11:26.443 "seek_data": false, 00:11:26.443 "copy": true, 00:11:26.443 "nvme_iov_md": false 00:11:26.443 }, 00:11:26.443 "memory_domains": [ 00:11:26.443 { 00:11:26.443 "dma_device_id": "system", 00:11:26.443 "dma_device_type": 1 00:11:26.443 }, 00:11:26.443 { 00:11:26.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.443 "dma_device_type": 2 00:11:26.443 } 00:11:26.443 ], 00:11:26.443 "driver_specific": {} 00:11:26.443 } 00:11:26.443 ] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.443 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.443 [2024-12-12 09:25:00.462718] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.443 [2024-12-12 09:25:00.462851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.443 [2024-12-12 09:25:00.462894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.703 [2024-12-12 09:25:00.465095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.703 [2024-12-12 09:25:00.465205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.703 09:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.703 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.704 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.704 "name": "Existed_Raid", 00:11:26.704 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:26.704 "strip_size_kb": 0, 00:11:26.704 "state": "configuring", 00:11:26.704 "raid_level": "raid1", 00:11:26.704 "superblock": false, 00:11:26.704 "num_base_bdevs": 4, 00:11:26.704 "num_base_bdevs_discovered": 3, 00:11:26.704 "num_base_bdevs_operational": 4, 00:11:26.704 "base_bdevs_list": [ 00:11:26.704 { 00:11:26.704 "name": "BaseBdev1", 00:11:26.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.704 "is_configured": false, 00:11:26.704 "data_offset": 0, 00:11:26.704 "data_size": 0 00:11:26.704 }, 00:11:26.704 { 00:11:26.704 "name": "BaseBdev2", 00:11:26.704 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:26.704 "is_configured": true, 00:11:26.704 "data_offset": 0, 00:11:26.704 "data_size": 65536 00:11:26.704 }, 00:11:26.704 { 00:11:26.704 "name": "BaseBdev3", 00:11:26.704 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:26.704 "is_configured": true, 00:11:26.704 "data_offset": 0, 00:11:26.704 "data_size": 65536 00:11:26.704 }, 00:11:26.704 { 00:11:26.704 "name": "BaseBdev4", 00:11:26.704 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:26.704 "is_configured": true, 00:11:26.704 "data_offset": 0, 00:11:26.704 "data_size": 65536 00:11:26.704 } 00:11:26.704 ] 00:11:26.704 }' 00:11:26.704 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.704 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.964 [2024-12-12 09:25:00.862022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.964 "name": "Existed_Raid", 00:11:26.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.964 
"strip_size_kb": 0, 00:11:26.964 "state": "configuring", 00:11:26.964 "raid_level": "raid1", 00:11:26.964 "superblock": false, 00:11:26.964 "num_base_bdevs": 4, 00:11:26.964 "num_base_bdevs_discovered": 2, 00:11:26.964 "num_base_bdevs_operational": 4, 00:11:26.964 "base_bdevs_list": [ 00:11:26.964 { 00:11:26.964 "name": "BaseBdev1", 00:11:26.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.964 "is_configured": false, 00:11:26.964 "data_offset": 0, 00:11:26.964 "data_size": 0 00:11:26.964 }, 00:11:26.964 { 00:11:26.964 "name": null, 00:11:26.964 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:26.964 "is_configured": false, 00:11:26.964 "data_offset": 0, 00:11:26.964 "data_size": 65536 00:11:26.964 }, 00:11:26.964 { 00:11:26.964 "name": "BaseBdev3", 00:11:26.964 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:26.964 "is_configured": true, 00:11:26.964 "data_offset": 0, 00:11:26.964 "data_size": 65536 00:11:26.964 }, 00:11:26.964 { 00:11:26.964 "name": "BaseBdev4", 00:11:26.964 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:26.964 "is_configured": true, 00:11:26.964 "data_offset": 0, 00:11:26.964 "data_size": 65536 00:11:26.964 } 00:11:26.964 ] 00:11:26.964 }' 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.964 09:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.534 09:25:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.534 [2024-12-12 09:25:01.397825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.534 BaseBdev1 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.534 [ 00:11:27.534 { 00:11:27.534 "name": "BaseBdev1", 00:11:27.534 "aliases": [ 00:11:27.534 "e6bfec6f-de03-45bc-a464-0c9e76e01e86" 00:11:27.534 ], 00:11:27.534 "product_name": "Malloc disk", 00:11:27.534 "block_size": 512, 00:11:27.534 "num_blocks": 65536, 00:11:27.534 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:27.534 "assigned_rate_limits": { 00:11:27.534 "rw_ios_per_sec": 0, 00:11:27.534 "rw_mbytes_per_sec": 0, 00:11:27.534 "r_mbytes_per_sec": 0, 00:11:27.534 "w_mbytes_per_sec": 0 00:11:27.534 }, 00:11:27.534 "claimed": true, 00:11:27.534 "claim_type": "exclusive_write", 00:11:27.534 "zoned": false, 00:11:27.534 "supported_io_types": { 00:11:27.534 "read": true, 00:11:27.534 "write": true, 00:11:27.534 "unmap": true, 00:11:27.534 "flush": true, 00:11:27.534 "reset": true, 00:11:27.534 "nvme_admin": false, 00:11:27.534 "nvme_io": false, 00:11:27.534 "nvme_io_md": false, 00:11:27.534 "write_zeroes": true, 00:11:27.534 "zcopy": true, 00:11:27.534 "get_zone_info": false, 00:11:27.534 "zone_management": false, 00:11:27.534 "zone_append": false, 00:11:27.534 "compare": false, 00:11:27.534 "compare_and_write": false, 00:11:27.534 "abort": true, 00:11:27.534 "seek_hole": false, 00:11:27.534 "seek_data": false, 00:11:27.534 "copy": true, 00:11:27.534 "nvme_iov_md": false 00:11:27.534 }, 00:11:27.534 "memory_domains": [ 00:11:27.534 { 00:11:27.534 "dma_device_id": "system", 00:11:27.534 "dma_device_type": 1 00:11:27.534 }, 00:11:27.534 { 00:11:27.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.534 "dma_device_type": 2 00:11:27.534 } 00:11:27.534 ], 00:11:27.534 "driver_specific": {} 00:11:27.534 } 00:11:27.534 ] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.534 "name": "Existed_Raid", 00:11:27.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.534 
"strip_size_kb": 0, 00:11:27.534 "state": "configuring", 00:11:27.534 "raid_level": "raid1", 00:11:27.534 "superblock": false, 00:11:27.534 "num_base_bdevs": 4, 00:11:27.534 "num_base_bdevs_discovered": 3, 00:11:27.534 "num_base_bdevs_operational": 4, 00:11:27.534 "base_bdevs_list": [ 00:11:27.534 { 00:11:27.534 "name": "BaseBdev1", 00:11:27.534 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:27.534 "is_configured": true, 00:11:27.534 "data_offset": 0, 00:11:27.534 "data_size": 65536 00:11:27.534 }, 00:11:27.534 { 00:11:27.534 "name": null, 00:11:27.534 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:27.534 "is_configured": false, 00:11:27.534 "data_offset": 0, 00:11:27.534 "data_size": 65536 00:11:27.534 }, 00:11:27.534 { 00:11:27.534 "name": "BaseBdev3", 00:11:27.534 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:27.534 "is_configured": true, 00:11:27.534 "data_offset": 0, 00:11:27.534 "data_size": 65536 00:11:27.534 }, 00:11:27.534 { 00:11:27.534 "name": "BaseBdev4", 00:11:27.534 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:27.534 "is_configured": true, 00:11:27.534 "data_offset": 0, 00:11:27.534 "data_size": 65536 00:11:27.534 } 00:11:27.534 ] 00:11:27.534 }' 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.534 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.105 
09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 [2024-12-12 09:25:01.885043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.105 "name": "Existed_Raid", 00:11:28.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.105 "strip_size_kb": 0, 00:11:28.105 "state": "configuring", 00:11:28.105 "raid_level": "raid1", 00:11:28.105 "superblock": false, 00:11:28.105 "num_base_bdevs": 4, 00:11:28.105 "num_base_bdevs_discovered": 2, 00:11:28.105 "num_base_bdevs_operational": 4, 00:11:28.105 "base_bdevs_list": [ 00:11:28.105 { 00:11:28.105 "name": "BaseBdev1", 00:11:28.105 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:28.105 "is_configured": true, 00:11:28.105 "data_offset": 0, 00:11:28.105 "data_size": 65536 00:11:28.105 }, 00:11:28.105 { 00:11:28.105 "name": null, 00:11:28.105 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:28.105 "is_configured": false, 00:11:28.105 "data_offset": 0, 00:11:28.105 "data_size": 65536 00:11:28.105 }, 00:11:28.105 { 00:11:28.105 "name": null, 00:11:28.105 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:28.105 "is_configured": false, 00:11:28.105 "data_offset": 0, 00:11:28.105 "data_size": 65536 00:11:28.105 }, 00:11:28.105 { 00:11:28.105 "name": "BaseBdev4", 00:11:28.105 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:28.105 "is_configured": true, 00:11:28.105 "data_offset": 0, 00:11:28.105 "data_size": 65536 00:11:28.105 } 00:11:28.105 ] 00:11:28.105 }' 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.105 09:25:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.365 [2024-12-12 09:25:02.364205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.365 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.624 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.624 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.624 "name": "Existed_Raid", 00:11:28.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.624 "strip_size_kb": 0, 00:11:28.624 "state": "configuring", 00:11:28.624 "raid_level": "raid1", 00:11:28.624 "superblock": false, 00:11:28.624 "num_base_bdevs": 4, 00:11:28.624 "num_base_bdevs_discovered": 3, 00:11:28.624 "num_base_bdevs_operational": 4, 00:11:28.624 "base_bdevs_list": [ 00:11:28.624 { 00:11:28.624 "name": "BaseBdev1", 00:11:28.624 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:28.624 "is_configured": true, 00:11:28.624 "data_offset": 0, 00:11:28.624 "data_size": 65536 00:11:28.624 }, 00:11:28.624 { 00:11:28.624 "name": null, 00:11:28.624 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:28.624 "is_configured": false, 00:11:28.624 "data_offset": 0, 00:11:28.624 "data_size": 65536 00:11:28.624 }, 00:11:28.624 { 
00:11:28.624 "name": "BaseBdev3", 00:11:28.624 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:28.624 "is_configured": true, 00:11:28.624 "data_offset": 0, 00:11:28.624 "data_size": 65536 00:11:28.624 }, 00:11:28.624 { 00:11:28.624 "name": "BaseBdev4", 00:11:28.624 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:28.624 "is_configured": true, 00:11:28.624 "data_offset": 0, 00:11:28.624 "data_size": 65536 00:11:28.624 } 00:11:28.624 ] 00:11:28.624 }' 00:11:28.624 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.624 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.884 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.884 [2024-12-12 09:25:02.859486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.143 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.143 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.144 09:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.144 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.144 "name": "Existed_Raid", 00:11:29.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.144 "strip_size_kb": 0, 00:11:29.144 "state": "configuring", 00:11:29.144 "raid_level": "raid1", 00:11:29.144 "superblock": false, 00:11:29.144 
"num_base_bdevs": 4, 00:11:29.144 "num_base_bdevs_discovered": 2, 00:11:29.144 "num_base_bdevs_operational": 4, 00:11:29.144 "base_bdevs_list": [ 00:11:29.144 { 00:11:29.144 "name": null, 00:11:29.144 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:29.144 "is_configured": false, 00:11:29.144 "data_offset": 0, 00:11:29.144 "data_size": 65536 00:11:29.144 }, 00:11:29.144 { 00:11:29.144 "name": null, 00:11:29.144 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:29.144 "is_configured": false, 00:11:29.144 "data_offset": 0, 00:11:29.144 "data_size": 65536 00:11:29.144 }, 00:11:29.144 { 00:11:29.144 "name": "BaseBdev3", 00:11:29.144 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:29.144 "is_configured": true, 00:11:29.144 "data_offset": 0, 00:11:29.144 "data_size": 65536 00:11:29.144 }, 00:11:29.144 { 00:11:29.144 "name": "BaseBdev4", 00:11:29.144 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:29.144 "is_configured": true, 00:11:29.144 "data_offset": 0, 00:11:29.144 "data_size": 65536 00:11:29.144 } 00:11:29.144 ] 00:11:29.144 }' 00:11:29.144 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.144 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.403 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.403 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.403 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.403 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.403 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.663 09:25:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.663 [2024-12-12 09:25:03.446138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.663 09:25:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.663 "name": "Existed_Raid", 00:11:29.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.663 "strip_size_kb": 0, 00:11:29.663 "state": "configuring", 00:11:29.663 "raid_level": "raid1", 00:11:29.663 "superblock": false, 00:11:29.663 "num_base_bdevs": 4, 00:11:29.663 "num_base_bdevs_discovered": 3, 00:11:29.663 "num_base_bdevs_operational": 4, 00:11:29.663 "base_bdevs_list": [ 00:11:29.663 { 00:11:29.663 "name": null, 00:11:29.663 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:29.663 "is_configured": false, 00:11:29.663 "data_offset": 0, 00:11:29.663 "data_size": 65536 00:11:29.663 }, 00:11:29.663 { 00:11:29.663 "name": "BaseBdev2", 00:11:29.663 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:29.663 "is_configured": true, 00:11:29.663 "data_offset": 0, 00:11:29.663 "data_size": 65536 00:11:29.663 }, 00:11:29.663 { 00:11:29.663 "name": "BaseBdev3", 00:11:29.663 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:29.663 "is_configured": true, 00:11:29.663 "data_offset": 0, 00:11:29.663 "data_size": 65536 00:11:29.663 }, 00:11:29.663 { 00:11:29.663 "name": "BaseBdev4", 00:11:29.663 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:29.663 "is_configured": true, 00:11:29.663 "data_offset": 0, 00:11:29.663 "data_size": 65536 00:11:29.663 } 00:11:29.663 ] 00:11:29.663 }' 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.663 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e6bfec6f-de03-45bc-a464-0c9e76e01e86 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.182 [2024-12-12 09:25:03.986608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.182 [2024-12-12 09:25:03.986740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.182 [2024-12-12 09:25:03.986771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:30.182 [2024-12-12 09:25:03.987151] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:30.182 [2024-12-12 09:25:03.987380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.182 [2024-12-12 09:25:03.987424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.182 [2024-12-12 09:25:03.987749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.182 NewBaseBdev 00:11:30.182 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.183 09:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.183 09:25:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.183 [ 00:11:30.183 { 00:11:30.183 "name": "NewBaseBdev", 00:11:30.183 "aliases": [ 00:11:30.183 "e6bfec6f-de03-45bc-a464-0c9e76e01e86" 00:11:30.183 ], 00:11:30.183 "product_name": "Malloc disk", 00:11:30.183 "block_size": 512, 00:11:30.183 "num_blocks": 65536, 00:11:30.183 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:30.183 "assigned_rate_limits": { 00:11:30.183 "rw_ios_per_sec": 0, 00:11:30.183 "rw_mbytes_per_sec": 0, 00:11:30.183 "r_mbytes_per_sec": 0, 00:11:30.183 "w_mbytes_per_sec": 0 00:11:30.183 }, 00:11:30.183 "claimed": true, 00:11:30.183 "claim_type": "exclusive_write", 00:11:30.183 "zoned": false, 00:11:30.183 "supported_io_types": { 00:11:30.183 "read": true, 00:11:30.183 "write": true, 00:11:30.183 "unmap": true, 00:11:30.183 "flush": true, 00:11:30.183 "reset": true, 00:11:30.183 "nvme_admin": false, 00:11:30.183 "nvme_io": false, 00:11:30.183 "nvme_io_md": false, 00:11:30.183 "write_zeroes": true, 00:11:30.183 "zcopy": true, 00:11:30.183 "get_zone_info": false, 00:11:30.183 "zone_management": false, 00:11:30.183 "zone_append": false, 00:11:30.183 "compare": false, 00:11:30.183 "compare_and_write": false, 00:11:30.183 "abort": true, 00:11:30.183 "seek_hole": false, 00:11:30.183 "seek_data": false, 00:11:30.183 "copy": true, 00:11:30.183 "nvme_iov_md": false 00:11:30.183 }, 00:11:30.183 "memory_domains": [ 00:11:30.183 { 00:11:30.183 "dma_device_id": "system", 00:11:30.183 "dma_device_type": 1 00:11:30.183 }, 00:11:30.183 { 00:11:30.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.183 "dma_device_type": 2 00:11:30.183 } 00:11:30.183 ], 00:11:30.183 "driver_specific": {} 00:11:30.183 } 00:11:30.183 ] 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.183 09:25:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.183 "name": "Existed_Raid", 00:11:30.183 "uuid": "a86db78e-4e8c-46cf-b995-39c953d65d75", 00:11:30.183 "strip_size_kb": 0, 00:11:30.183 "state": "online", 00:11:30.183 "raid_level": "raid1", 
00:11:30.183 "superblock": false, 00:11:30.183 "num_base_bdevs": 4, 00:11:30.183 "num_base_bdevs_discovered": 4, 00:11:30.183 "num_base_bdevs_operational": 4, 00:11:30.183 "base_bdevs_list": [ 00:11:30.183 { 00:11:30.183 "name": "NewBaseBdev", 00:11:30.183 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:30.183 "is_configured": true, 00:11:30.183 "data_offset": 0, 00:11:30.183 "data_size": 65536 00:11:30.183 }, 00:11:30.183 { 00:11:30.183 "name": "BaseBdev2", 00:11:30.183 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:30.183 "is_configured": true, 00:11:30.183 "data_offset": 0, 00:11:30.183 "data_size": 65536 00:11:30.183 }, 00:11:30.183 { 00:11:30.183 "name": "BaseBdev3", 00:11:30.183 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:30.183 "is_configured": true, 00:11:30.183 "data_offset": 0, 00:11:30.183 "data_size": 65536 00:11:30.183 }, 00:11:30.183 { 00:11:30.183 "name": "BaseBdev4", 00:11:30.183 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:30.183 "is_configured": true, 00:11:30.183 "data_offset": 0, 00:11:30.183 "data_size": 65536 00:11:30.183 } 00:11:30.183 ] 00:11:30.183 }' 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.183 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.752 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.753 [2024-12-12 09:25:04.482258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.753 "name": "Existed_Raid", 00:11:30.753 "aliases": [ 00:11:30.753 "a86db78e-4e8c-46cf-b995-39c953d65d75" 00:11:30.753 ], 00:11:30.753 "product_name": "Raid Volume", 00:11:30.753 "block_size": 512, 00:11:30.753 "num_blocks": 65536, 00:11:30.753 "uuid": "a86db78e-4e8c-46cf-b995-39c953d65d75", 00:11:30.753 "assigned_rate_limits": { 00:11:30.753 "rw_ios_per_sec": 0, 00:11:30.753 "rw_mbytes_per_sec": 0, 00:11:30.753 "r_mbytes_per_sec": 0, 00:11:30.753 "w_mbytes_per_sec": 0 00:11:30.753 }, 00:11:30.753 "claimed": false, 00:11:30.753 "zoned": false, 00:11:30.753 "supported_io_types": { 00:11:30.753 "read": true, 00:11:30.753 "write": true, 00:11:30.753 "unmap": false, 00:11:30.753 "flush": false, 00:11:30.753 "reset": true, 00:11:30.753 "nvme_admin": false, 00:11:30.753 "nvme_io": false, 00:11:30.753 "nvme_io_md": false, 00:11:30.753 "write_zeroes": true, 00:11:30.753 "zcopy": false, 00:11:30.753 "get_zone_info": false, 00:11:30.753 "zone_management": false, 00:11:30.753 "zone_append": false, 00:11:30.753 "compare": false, 00:11:30.753 "compare_and_write": false, 00:11:30.753 "abort": false, 00:11:30.753 "seek_hole": false, 00:11:30.753 "seek_data": false, 00:11:30.753 "copy": false, 00:11:30.753 
"nvme_iov_md": false 00:11:30.753 }, 00:11:30.753 "memory_domains": [ 00:11:30.753 { 00:11:30.753 "dma_device_id": "system", 00:11:30.753 "dma_device_type": 1 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.753 "dma_device_type": 2 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "system", 00:11:30.753 "dma_device_type": 1 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.753 "dma_device_type": 2 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "system", 00:11:30.753 "dma_device_type": 1 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.753 "dma_device_type": 2 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "system", 00:11:30.753 "dma_device_type": 1 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.753 "dma_device_type": 2 00:11:30.753 } 00:11:30.753 ], 00:11:30.753 "driver_specific": { 00:11:30.753 "raid": { 00:11:30.753 "uuid": "a86db78e-4e8c-46cf-b995-39c953d65d75", 00:11:30.753 "strip_size_kb": 0, 00:11:30.753 "state": "online", 00:11:30.753 "raid_level": "raid1", 00:11:30.753 "superblock": false, 00:11:30.753 "num_base_bdevs": 4, 00:11:30.753 "num_base_bdevs_discovered": 4, 00:11:30.753 "num_base_bdevs_operational": 4, 00:11:30.753 "base_bdevs_list": [ 00:11:30.753 { 00:11:30.753 "name": "NewBaseBdev", 00:11:30.753 "uuid": "e6bfec6f-de03-45bc-a464-0c9e76e01e86", 00:11:30.753 "is_configured": true, 00:11:30.753 "data_offset": 0, 00:11:30.753 "data_size": 65536 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "name": "BaseBdev2", 00:11:30.753 "uuid": "0bf5bb34-d100-4835-9439-b0d7f534bed5", 00:11:30.753 "is_configured": true, 00:11:30.753 "data_offset": 0, 00:11:30.753 "data_size": 65536 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "name": "BaseBdev3", 00:11:30.753 "uuid": "3da67f03-05b8-4b79-91e4-55d5e41e83cc", 00:11:30.753 "is_configured": true, 
00:11:30.753 "data_offset": 0, 00:11:30.753 "data_size": 65536 00:11:30.753 }, 00:11:30.753 { 00:11:30.753 "name": "BaseBdev4", 00:11:30.753 "uuid": "f5cf86b5-cc31-408f-8ae7-41f0c968a896", 00:11:30.753 "is_configured": true, 00:11:30.753 "data_offset": 0, 00:11:30.753 "data_size": 65536 00:11:30.753 } 00:11:30.753 ] 00:11:30.753 } 00:11:30.753 } 00:11:30.753 }' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.753 BaseBdev2 00:11:30.753 BaseBdev3 00:11:30.753 BaseBdev4' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.753 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.013 [2024-12-12 09:25:04.797323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.013 [2024-12-12 09:25:04.797352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.013 [2024-12-12 09:25:04.797438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.013 [2024-12-12 09:25:04.797764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.013 [2024-12-12 09:25:04.797778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74319 
00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74319 ']' 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74319 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74319 00:11:31.013 killing process with pid 74319 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74319' 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74319 00:11:31.013 [2024-12-12 09:25:04.853940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.013 09:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74319 00:11:31.275 [2024-12-12 09:25:05.270469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.657 00:11:32.657 real 0m11.494s 00:11:32.657 user 0m17.934s 00:11:32.657 sys 0m2.186s 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.657 ************************************ 00:11:32.657 END TEST raid_state_function_test 00:11:32.657 ************************************ 00:11:32.657 09:25:06 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:32.657 09:25:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.657 09:25:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.657 09:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.657 ************************************ 00:11:32.657 START TEST raid_state_function_test_sb 00:11:32.657 ************************************ 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.657 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.658 09:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74991 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74991' 00:11:32.658 Process raid pid: 74991 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74991 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74991 ']' 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.658 09:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.658 [2024-12-12 09:25:06.614875] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:32.658 [2024-12-12 09:25:06.615174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.917 [2024-12-12 09:25:06.794779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.917 [2024-12-12 09:25:06.926399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.177 [2024-12-12 09:25:07.164252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.177 [2024-12-12 09:25:07.164402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.440 [2024-12-12 09:25:07.441734] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.440 [2024-12-12 09:25:07.441898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.440 [2024-12-12 09:25:07.441933] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.440 [2024-12-12 09:25:07.441969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.440 [2024-12-12 09:25:07.441989] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:33.440 [2024-12-12 09:25:07.442036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.440 [2024-12-12 09:25:07.442057] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.440 [2024-12-12 09:25:07.442078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.440 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.441 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.441 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.441 09:25:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.441 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.701 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.701 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.701 "name": "Existed_Raid", 00:11:33.701 "uuid": "e8ec272d-17af-4ca5-afa4-17facd15e44e", 00:11:33.701 "strip_size_kb": 0, 00:11:33.701 "state": "configuring", 00:11:33.701 "raid_level": "raid1", 00:11:33.701 "superblock": true, 00:11:33.701 "num_base_bdevs": 4, 00:11:33.701 "num_base_bdevs_discovered": 0, 00:11:33.701 "num_base_bdevs_operational": 4, 00:11:33.701 "base_bdevs_list": [ 00:11:33.701 { 00:11:33.701 "name": "BaseBdev1", 00:11:33.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.701 "is_configured": false, 00:11:33.701 "data_offset": 0, 00:11:33.701 "data_size": 0 00:11:33.701 }, 00:11:33.701 { 00:11:33.701 "name": "BaseBdev2", 00:11:33.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.701 "is_configured": false, 00:11:33.701 "data_offset": 0, 00:11:33.701 "data_size": 0 00:11:33.701 }, 00:11:33.701 { 00:11:33.701 "name": "BaseBdev3", 00:11:33.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.701 "is_configured": false, 00:11:33.701 "data_offset": 0, 00:11:33.701 "data_size": 0 00:11:33.701 }, 00:11:33.701 { 00:11:33.701 "name": "BaseBdev4", 00:11:33.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.701 "is_configured": false, 00:11:33.701 "data_offset": 0, 00:11:33.701 "data_size": 0 00:11:33.701 } 00:11:33.701 ] 00:11:33.701 }' 00:11:33.701 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.701 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- 
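The `verify_raid_bdev_state` step above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares the selected fields against the expected values. A minimal Python sketch of that same check follows; the JSON is trimmed to the fields the test inspects, with names and values copied from the log, and the helper name mirrors the shell function but is otherwise hypothetical:

```python
import json

# Trimmed output of `rpc.py bdev_raid_get_bdevs all`, as dumped in the log:
# a 4-disk raid1 created with -s (superblock) before any base bdev exists.
rpc_output = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": "BaseBdev2", "is_configured": false},
      {"name": "BaseBdev3", "is_configured": false},
      {"name": "BaseBdev4", "is_configured": false}
    ]
  }
]
'''

def verify_raid_bdev_state(raw_json, raid_name, expected_state,
                           expected_operational):
    """Mimic the jq select + field comparisons done by the shell helper."""
    # jq: .[] | select(.name == raid_name)
    info = next(b for b in json.loads(raw_json) if b["name"] == raid_name)
    assert info["state"] == expected_state
    assert info["num_base_bdevs_operational"] == expected_operational
    # Discovered count must agree with how many base bdevs are configured.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == info["num_base_bdevs_discovered"]
    return info

info = verify_raid_bdev_state(rpc_output, "Existed_Raid", "configuring", 4)
print(info["raid_level"])  # raid1
```

The raid stays in `configuring` (rather than `online`) exactly because `num_base_bdevs_discovered` is still below `num_base_bdevs_operational`, which is what the subsequent `bdev_malloc_create` calls in the log incrementally fix.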
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.961 [2024-12-12 09:25:07.852970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.961 [2024-12-12 09:25:07.853022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.961 [2024-12-12 09:25:07.864907] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.961 [2024-12-12 09:25:07.865011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.961 [2024-12-12 09:25:07.865040] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.961 [2024-12-12 09:25:07.865064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.961 [2024-12-12 09:25:07.865083] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.961 [2024-12-12 09:25:07.865104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.961 [2024-12-12 09:25:07.865121] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:33.961 [2024-12-12 09:25:07.865159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.961 [2024-12-12 09:25:07.918612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.961 BaseBdev1 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.961 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.961 [ 00:11:33.961 { 00:11:33.961 "name": "BaseBdev1", 00:11:33.961 "aliases": [ 00:11:33.961 "a5084de9-0f85-4a5d-b481-6ca3e426ce7d" 00:11:33.961 ], 00:11:33.961 "product_name": "Malloc disk", 00:11:33.962 "block_size": 512, 00:11:33.962 "num_blocks": 65536, 00:11:33.962 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:33.962 "assigned_rate_limits": { 00:11:33.962 "rw_ios_per_sec": 0, 00:11:33.962 "rw_mbytes_per_sec": 0, 00:11:33.962 "r_mbytes_per_sec": 0, 00:11:33.962 "w_mbytes_per_sec": 0 00:11:33.962 }, 00:11:33.962 "claimed": true, 00:11:33.962 "claim_type": "exclusive_write", 00:11:33.962 "zoned": false, 00:11:33.962 "supported_io_types": { 00:11:33.962 "read": true, 00:11:33.962 "write": true, 00:11:33.962 "unmap": true, 00:11:33.962 "flush": true, 00:11:33.962 "reset": true, 00:11:33.962 "nvme_admin": false, 00:11:33.962 "nvme_io": false, 00:11:33.962 "nvme_io_md": false, 00:11:33.962 "write_zeroes": true, 00:11:33.962 "zcopy": true, 00:11:33.962 "get_zone_info": false, 00:11:33.962 "zone_management": false, 00:11:33.962 "zone_append": false, 00:11:33.962 "compare": false, 00:11:33.962 "compare_and_write": false, 00:11:33.962 "abort": true, 00:11:33.962 "seek_hole": false, 00:11:33.962 "seek_data": false, 00:11:33.962 "copy": true, 00:11:33.962 "nvme_iov_md": false 00:11:33.962 }, 00:11:33.962 "memory_domains": [ 00:11:33.962 { 00:11:33.962 "dma_device_id": "system", 00:11:33.962 "dma_device_type": 1 00:11:33.962 }, 00:11:33.962 { 00:11:33.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.962 "dma_device_type": 2 00:11:33.962 } 00:11:33.962 ], 00:11:33.962 "driver_specific": {} 
00:11:33.962 } 00:11:33.962 ] 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.962 09:25:07 bdev_raid.raid_state_function_test_sb -- 
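After `bdev_malloc_create 32 512 -b BaseBdev1`, the `waitforbdev` helper polls `rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000` and the dump above shows the bdev claimed by the raid with `"claim_type": "exclusive_write"`. A small Python sketch of the sanity checks implied by that dump (field values copied from the log; the checks themselves are illustrative, not part of the test script):

```python
import json

# Trimmed `rpc.py bdev_get_bdevs -b BaseBdev1` reply, values from the log.
bdev_json = '''
[
  {
    "name": "BaseBdev1",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": true,
    "claim_type": "exclusive_write",
    "supported_io_types": {"read": true, "write": true}
  }
]
'''

bdev = json.loads(bdev_json)[0]
# A base bdev absorbed into the raid must hold an exclusive_write claim,
# so no other module can open it for writing.
assert bdev["claimed"] and bdev["claim_type"] == "exclusive_write"
# 512-byte blocks * 65536 blocks = the 32 MiB requested by
# `bdev_malloc_create 32 512`.
size_mib = bdev["block_size"] * bdev["num_blocks"] // (1024 * 1024)
print(size_mib)  # 32
```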
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.221 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.221 "name": "Existed_Raid", 00:11:34.221 "uuid": "810e6813-32e0-47e1-b6ba-45827bf8b554", 00:11:34.221 "strip_size_kb": 0, 00:11:34.221 "state": "configuring", 00:11:34.221 "raid_level": "raid1", 00:11:34.221 "superblock": true, 00:11:34.221 "num_base_bdevs": 4, 00:11:34.221 "num_base_bdevs_discovered": 1, 00:11:34.221 "num_base_bdevs_operational": 4, 00:11:34.221 "base_bdevs_list": [ 00:11:34.221 { 00:11:34.221 "name": "BaseBdev1", 00:11:34.221 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:34.221 "is_configured": true, 00:11:34.221 "data_offset": 2048, 00:11:34.221 "data_size": 63488 00:11:34.221 }, 00:11:34.221 { 00:11:34.221 "name": "BaseBdev2", 00:11:34.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.221 "is_configured": false, 00:11:34.221 "data_offset": 0, 00:11:34.221 "data_size": 0 00:11:34.221 }, 00:11:34.221 { 00:11:34.221 "name": "BaseBdev3", 00:11:34.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.221 "is_configured": false, 00:11:34.221 "data_offset": 0, 00:11:34.221 "data_size": 0 00:11:34.221 }, 00:11:34.221 { 00:11:34.221 "name": "BaseBdev4", 00:11:34.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.221 "is_configured": false, 00:11:34.221 "data_offset": 0, 00:11:34.221 "data_size": 0 00:11:34.221 } 00:11:34.221 ] 00:11:34.221 }' 00:11:34.221 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.221 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.482 [2024-12-12 09:25:08.405803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.482 [2024-12-12 09:25:08.405856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.482 [2024-12-12 09:25:08.413849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.482 [2024-12-12 09:25:08.415988] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.482 [2024-12-12 09:25:08.416030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.482 [2024-12-12 09:25:08.416041] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.482 [2024-12-12 09:25:08.416070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.482 [2024-12-12 09:25:08.416078] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.482 [2024-12-12 09:25:08.416087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.482 09:25:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.482 "name": 
"Existed_Raid", 00:11:34.482 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:34.482 "strip_size_kb": 0, 00:11:34.482 "state": "configuring", 00:11:34.482 "raid_level": "raid1", 00:11:34.482 "superblock": true, 00:11:34.482 "num_base_bdevs": 4, 00:11:34.482 "num_base_bdevs_discovered": 1, 00:11:34.482 "num_base_bdevs_operational": 4, 00:11:34.482 "base_bdevs_list": [ 00:11:34.482 { 00:11:34.482 "name": "BaseBdev1", 00:11:34.482 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:34.482 "is_configured": true, 00:11:34.482 "data_offset": 2048, 00:11:34.482 "data_size": 63488 00:11:34.482 }, 00:11:34.482 { 00:11:34.482 "name": "BaseBdev2", 00:11:34.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.482 "is_configured": false, 00:11:34.482 "data_offset": 0, 00:11:34.482 "data_size": 0 00:11:34.482 }, 00:11:34.482 { 00:11:34.482 "name": "BaseBdev3", 00:11:34.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.482 "is_configured": false, 00:11:34.482 "data_offset": 0, 00:11:34.482 "data_size": 0 00:11:34.482 }, 00:11:34.482 { 00:11:34.482 "name": "BaseBdev4", 00:11:34.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.482 "is_configured": false, 00:11:34.482 "data_offset": 0, 00:11:34.482 "data_size": 0 00:11:34.482 } 00:11:34.482 ] 00:11:34.482 }' 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.482 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.827 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.827 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.827 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.090 [2024-12-12 09:25:08.892949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.090 
BaseBdev2 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.090 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.090 [ 00:11:35.090 { 00:11:35.090 "name": "BaseBdev2", 00:11:35.090 "aliases": [ 00:11:35.090 "286a20db-4189-4606-8d4b-33c06098ab09" 00:11:35.090 ], 00:11:35.090 "product_name": "Malloc disk", 00:11:35.090 "block_size": 512, 00:11:35.090 "num_blocks": 65536, 00:11:35.090 "uuid": "286a20db-4189-4606-8d4b-33c06098ab09", 00:11:35.090 "assigned_rate_limits": { 
00:11:35.090 "rw_ios_per_sec": 0, 00:11:35.090 "rw_mbytes_per_sec": 0, 00:11:35.090 "r_mbytes_per_sec": 0, 00:11:35.090 "w_mbytes_per_sec": 0 00:11:35.090 }, 00:11:35.090 "claimed": true, 00:11:35.090 "claim_type": "exclusive_write", 00:11:35.090 "zoned": false, 00:11:35.090 "supported_io_types": { 00:11:35.090 "read": true, 00:11:35.090 "write": true, 00:11:35.090 "unmap": true, 00:11:35.090 "flush": true, 00:11:35.090 "reset": true, 00:11:35.090 "nvme_admin": false, 00:11:35.090 "nvme_io": false, 00:11:35.090 "nvme_io_md": false, 00:11:35.090 "write_zeroes": true, 00:11:35.090 "zcopy": true, 00:11:35.090 "get_zone_info": false, 00:11:35.090 "zone_management": false, 00:11:35.090 "zone_append": false, 00:11:35.090 "compare": false, 00:11:35.090 "compare_and_write": false, 00:11:35.090 "abort": true, 00:11:35.090 "seek_hole": false, 00:11:35.090 "seek_data": false, 00:11:35.090 "copy": true, 00:11:35.090 "nvme_iov_md": false 00:11:35.090 }, 00:11:35.090 "memory_domains": [ 00:11:35.090 { 00:11:35.090 "dma_device_id": "system", 00:11:35.090 "dma_device_type": 1 00:11:35.090 }, 00:11:35.090 { 00:11:35.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.090 "dma_device_type": 2 00:11:35.090 } 00:11:35.090 ], 00:11:35.090 "driver_specific": {} 00:11:35.091 } 00:11:35.091 ] 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.091 "name": "Existed_Raid", 00:11:35.091 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:35.091 "strip_size_kb": 0, 00:11:35.091 "state": "configuring", 00:11:35.091 "raid_level": "raid1", 00:11:35.091 "superblock": true, 00:11:35.091 "num_base_bdevs": 4, 00:11:35.091 "num_base_bdevs_discovered": 2, 00:11:35.091 "num_base_bdevs_operational": 4, 00:11:35.091 
"base_bdevs_list": [ 00:11:35.091 { 00:11:35.091 "name": "BaseBdev1", 00:11:35.091 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:35.091 "is_configured": true, 00:11:35.091 "data_offset": 2048, 00:11:35.091 "data_size": 63488 00:11:35.091 }, 00:11:35.091 { 00:11:35.091 "name": "BaseBdev2", 00:11:35.091 "uuid": "286a20db-4189-4606-8d4b-33c06098ab09", 00:11:35.091 "is_configured": true, 00:11:35.091 "data_offset": 2048, 00:11:35.091 "data_size": 63488 00:11:35.091 }, 00:11:35.091 { 00:11:35.091 "name": "BaseBdev3", 00:11:35.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.091 "is_configured": false, 00:11:35.091 "data_offset": 0, 00:11:35.091 "data_size": 0 00:11:35.091 }, 00:11:35.091 { 00:11:35.091 "name": "BaseBdev4", 00:11:35.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.091 "is_configured": false, 00:11:35.091 "data_offset": 0, 00:11:35.091 "data_size": 0 00:11:35.091 } 00:11:35.091 ] 00:11:35.091 }' 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.091 09:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.351 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.351 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.610 [2024-12-12 09:25:09.404928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.610 BaseBdev3 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.610 [ 00:11:35.610 { 00:11:35.610 "name": "BaseBdev3", 00:11:35.610 "aliases": [ 00:11:35.610 "26228a24-b668-4ee8-967f-fcf482b3ca6d" 00:11:35.610 ], 00:11:35.610 "product_name": "Malloc disk", 00:11:35.610 "block_size": 512, 00:11:35.610 "num_blocks": 65536, 00:11:35.610 "uuid": "26228a24-b668-4ee8-967f-fcf482b3ca6d", 00:11:35.610 "assigned_rate_limits": { 00:11:35.610 "rw_ios_per_sec": 0, 00:11:35.610 "rw_mbytes_per_sec": 0, 00:11:35.610 "r_mbytes_per_sec": 0, 00:11:35.610 "w_mbytes_per_sec": 0 00:11:35.610 }, 00:11:35.610 "claimed": true, 00:11:35.610 "claim_type": "exclusive_write", 00:11:35.610 "zoned": false, 00:11:35.610 "supported_io_types": { 00:11:35.610 "read": true, 00:11:35.610 
"write": true, 00:11:35.610 "unmap": true, 00:11:35.610 "flush": true, 00:11:35.610 "reset": true, 00:11:35.610 "nvme_admin": false, 00:11:35.610 "nvme_io": false, 00:11:35.610 "nvme_io_md": false, 00:11:35.610 "write_zeroes": true, 00:11:35.610 "zcopy": true, 00:11:35.610 "get_zone_info": false, 00:11:35.610 "zone_management": false, 00:11:35.610 "zone_append": false, 00:11:35.610 "compare": false, 00:11:35.610 "compare_and_write": false, 00:11:35.610 "abort": true, 00:11:35.610 "seek_hole": false, 00:11:35.610 "seek_data": false, 00:11:35.610 "copy": true, 00:11:35.610 "nvme_iov_md": false 00:11:35.610 }, 00:11:35.610 "memory_domains": [ 00:11:35.610 { 00:11:35.610 "dma_device_id": "system", 00:11:35.610 "dma_device_type": 1 00:11:35.610 }, 00:11:35.610 { 00:11:35.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.610 "dma_device_type": 2 00:11:35.610 } 00:11:35.610 ], 00:11:35.610 "driver_specific": {} 00:11:35.610 } 00:11:35.610 ] 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.610 "name": "Existed_Raid", 00:11:35.610 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:35.610 "strip_size_kb": 0, 00:11:35.610 "state": "configuring", 00:11:35.610 "raid_level": "raid1", 00:11:35.610 "superblock": true, 00:11:35.610 "num_base_bdevs": 4, 00:11:35.610 "num_base_bdevs_discovered": 3, 00:11:35.610 "num_base_bdevs_operational": 4, 00:11:35.610 "base_bdevs_list": [ 00:11:35.610 { 00:11:35.610 "name": "BaseBdev1", 00:11:35.610 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:35.610 "is_configured": true, 00:11:35.610 "data_offset": 2048, 00:11:35.610 "data_size": 63488 00:11:35.610 }, 00:11:35.610 { 00:11:35.610 "name": "BaseBdev2", 00:11:35.610 "uuid": 
"286a20db-4189-4606-8d4b-33c06098ab09", 00:11:35.610 "is_configured": true, 00:11:35.610 "data_offset": 2048, 00:11:35.610 "data_size": 63488 00:11:35.610 }, 00:11:35.610 { 00:11:35.610 "name": "BaseBdev3", 00:11:35.610 "uuid": "26228a24-b668-4ee8-967f-fcf482b3ca6d", 00:11:35.610 "is_configured": true, 00:11:35.610 "data_offset": 2048, 00:11:35.610 "data_size": 63488 00:11:35.610 }, 00:11:35.610 { 00:11:35.610 "name": "BaseBdev4", 00:11:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.610 "is_configured": false, 00:11:35.610 "data_offset": 0, 00:11:35.610 "data_size": 0 00:11:35.610 } 00:11:35.610 ] 00:11:35.610 }' 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.610 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.870 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.870 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.870 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.129 [2024-12-12 09:25:09.901582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.129 [2024-12-12 09:25:09.902028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.129 [2024-12-12 09:25:09.902084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.129 [2024-12-12 09:25:09.902416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.129 BaseBdev4 00:11:36.129 [2024-12-12 09:25:09.902631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.129 [2024-12-12 09:25:09.902655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:36.129 [2024-12-12 09:25:09.902808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.129 [ 00:11:36.129 { 00:11:36.129 "name": "BaseBdev4", 00:11:36.129 "aliases": [ 00:11:36.129 "cef86652-47eb-47a7-9ef6-ec9b81f2b36b" 00:11:36.129 ], 00:11:36.129 "product_name": "Malloc disk", 00:11:36.129 "block_size": 512, 00:11:36.129 
"num_blocks": 65536, 00:11:36.129 "uuid": "cef86652-47eb-47a7-9ef6-ec9b81f2b36b", 00:11:36.129 "assigned_rate_limits": { 00:11:36.129 "rw_ios_per_sec": 0, 00:11:36.129 "rw_mbytes_per_sec": 0, 00:11:36.129 "r_mbytes_per_sec": 0, 00:11:36.129 "w_mbytes_per_sec": 0 00:11:36.129 }, 00:11:36.129 "claimed": true, 00:11:36.129 "claim_type": "exclusive_write", 00:11:36.129 "zoned": false, 00:11:36.129 "supported_io_types": { 00:11:36.129 "read": true, 00:11:36.129 "write": true, 00:11:36.129 "unmap": true, 00:11:36.129 "flush": true, 00:11:36.129 "reset": true, 00:11:36.129 "nvme_admin": false, 00:11:36.129 "nvme_io": false, 00:11:36.129 "nvme_io_md": false, 00:11:36.129 "write_zeroes": true, 00:11:36.129 "zcopy": true, 00:11:36.129 "get_zone_info": false, 00:11:36.129 "zone_management": false, 00:11:36.129 "zone_append": false, 00:11:36.129 "compare": false, 00:11:36.129 "compare_and_write": false, 00:11:36.129 "abort": true, 00:11:36.129 "seek_hole": false, 00:11:36.129 "seek_data": false, 00:11:36.129 "copy": true, 00:11:36.129 "nvme_iov_md": false 00:11:36.129 }, 00:11:36.129 "memory_domains": [ 00:11:36.129 { 00:11:36.129 "dma_device_id": "system", 00:11:36.129 "dma_device_type": 1 00:11:36.129 }, 00:11:36.129 { 00:11:36.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.129 "dma_device_type": 2 00:11:36.129 } 00:11:36.129 ], 00:11:36.129 "driver_specific": {} 00:11:36.129 } 00:11:36.129 ] 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.129 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.130 "name": "Existed_Raid", 00:11:36.130 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:36.130 "strip_size_kb": 0, 00:11:36.130 "state": "online", 00:11:36.130 "raid_level": "raid1", 00:11:36.130 "superblock": true, 00:11:36.130 "num_base_bdevs": 4, 
00:11:36.130 "num_base_bdevs_discovered": 4, 00:11:36.130 "num_base_bdevs_operational": 4, 00:11:36.130 "base_bdevs_list": [ 00:11:36.130 { 00:11:36.130 "name": "BaseBdev1", 00:11:36.130 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:36.130 "is_configured": true, 00:11:36.130 "data_offset": 2048, 00:11:36.130 "data_size": 63488 00:11:36.130 }, 00:11:36.130 { 00:11:36.130 "name": "BaseBdev2", 00:11:36.130 "uuid": "286a20db-4189-4606-8d4b-33c06098ab09", 00:11:36.130 "is_configured": true, 00:11:36.130 "data_offset": 2048, 00:11:36.130 "data_size": 63488 00:11:36.130 }, 00:11:36.130 { 00:11:36.130 "name": "BaseBdev3", 00:11:36.130 "uuid": "26228a24-b668-4ee8-967f-fcf482b3ca6d", 00:11:36.130 "is_configured": true, 00:11:36.130 "data_offset": 2048, 00:11:36.130 "data_size": 63488 00:11:36.130 }, 00:11:36.130 { 00:11:36.130 "name": "BaseBdev4", 00:11:36.130 "uuid": "cef86652-47eb-47a7-9ef6-ec9b81f2b36b", 00:11:36.130 "is_configured": true, 00:11:36.130 "data_offset": 2048, 00:11:36.130 "data_size": 63488 00:11:36.130 } 00:11:36.130 ] 00:11:36.130 }' 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.130 09:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.389 
09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.389 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.389 [2024-12-12 09:25:10.397120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.650 "name": "Existed_Raid", 00:11:36.650 "aliases": [ 00:11:36.650 "8f26db20-000c-42a6-bc86-7d7c473457aa" 00:11:36.650 ], 00:11:36.650 "product_name": "Raid Volume", 00:11:36.650 "block_size": 512, 00:11:36.650 "num_blocks": 63488, 00:11:36.650 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:36.650 "assigned_rate_limits": { 00:11:36.650 "rw_ios_per_sec": 0, 00:11:36.650 "rw_mbytes_per_sec": 0, 00:11:36.650 "r_mbytes_per_sec": 0, 00:11:36.650 "w_mbytes_per_sec": 0 00:11:36.650 }, 00:11:36.650 "claimed": false, 00:11:36.650 "zoned": false, 00:11:36.650 "supported_io_types": { 00:11:36.650 "read": true, 00:11:36.650 "write": true, 00:11:36.650 "unmap": false, 00:11:36.650 "flush": false, 00:11:36.650 "reset": true, 00:11:36.650 "nvme_admin": false, 00:11:36.650 "nvme_io": false, 00:11:36.650 "nvme_io_md": false, 00:11:36.650 "write_zeroes": true, 00:11:36.650 "zcopy": false, 00:11:36.650 "get_zone_info": false, 00:11:36.650 "zone_management": false, 00:11:36.650 "zone_append": false, 00:11:36.650 "compare": false, 00:11:36.650 "compare_and_write": false, 00:11:36.650 "abort": false, 00:11:36.650 "seek_hole": false, 00:11:36.650 "seek_data": false, 00:11:36.650 "copy": false, 00:11:36.650 
"nvme_iov_md": false 00:11:36.650 }, 00:11:36.650 "memory_domains": [ 00:11:36.650 { 00:11:36.650 "dma_device_id": "system", 00:11:36.650 "dma_device_type": 1 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.650 "dma_device_type": 2 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "system", 00:11:36.650 "dma_device_type": 1 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.650 "dma_device_type": 2 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "system", 00:11:36.650 "dma_device_type": 1 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.650 "dma_device_type": 2 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "system", 00:11:36.650 "dma_device_type": 1 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.650 "dma_device_type": 2 00:11:36.650 } 00:11:36.650 ], 00:11:36.650 "driver_specific": { 00:11:36.650 "raid": { 00:11:36.650 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:36.650 "strip_size_kb": 0, 00:11:36.650 "state": "online", 00:11:36.650 "raid_level": "raid1", 00:11:36.650 "superblock": true, 00:11:36.650 "num_base_bdevs": 4, 00:11:36.650 "num_base_bdevs_discovered": 4, 00:11:36.650 "num_base_bdevs_operational": 4, 00:11:36.650 "base_bdevs_list": [ 00:11:36.650 { 00:11:36.650 "name": "BaseBdev1", 00:11:36.650 "uuid": "a5084de9-0f85-4a5d-b481-6ca3e426ce7d", 00:11:36.650 "is_configured": true, 00:11:36.650 "data_offset": 2048, 00:11:36.650 "data_size": 63488 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "name": "BaseBdev2", 00:11:36.650 "uuid": "286a20db-4189-4606-8d4b-33c06098ab09", 00:11:36.650 "is_configured": true, 00:11:36.650 "data_offset": 2048, 00:11:36.650 "data_size": 63488 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "name": "BaseBdev3", 00:11:36.650 "uuid": "26228a24-b668-4ee8-967f-fcf482b3ca6d", 00:11:36.650 "is_configured": true, 
00:11:36.650 "data_offset": 2048, 00:11:36.650 "data_size": 63488 00:11:36.650 }, 00:11:36.650 { 00:11:36.650 "name": "BaseBdev4", 00:11:36.650 "uuid": "cef86652-47eb-47a7-9ef6-ec9b81f2b36b", 00:11:36.650 "is_configured": true, 00:11:36.650 "data_offset": 2048, 00:11:36.650 "data_size": 63488 00:11:36.650 } 00:11:36.650 ] 00:11:36.650 } 00:11:36.650 } 00:11:36.650 }' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.650 BaseBdev2 00:11:36.650 BaseBdev3 00:11:36.650 BaseBdev4' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.650 09:25:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.650 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 [2024-12-12 09:25:10.744211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.910 09:25:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.910 "name": "Existed_Raid", 00:11:36.910 "uuid": "8f26db20-000c-42a6-bc86-7d7c473457aa", 00:11:36.910 "strip_size_kb": 0, 00:11:36.910 
"state": "online", 00:11:36.910 "raid_level": "raid1", 00:11:36.910 "superblock": true, 00:11:36.910 "num_base_bdevs": 4, 00:11:36.910 "num_base_bdevs_discovered": 3, 00:11:36.910 "num_base_bdevs_operational": 3, 00:11:36.910 "base_bdevs_list": [ 00:11:36.910 { 00:11:36.910 "name": null, 00:11:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.910 "is_configured": false, 00:11:36.910 "data_offset": 0, 00:11:36.910 "data_size": 63488 00:11:36.910 }, 00:11:36.910 { 00:11:36.910 "name": "BaseBdev2", 00:11:36.910 "uuid": "286a20db-4189-4606-8d4b-33c06098ab09", 00:11:36.910 "is_configured": true, 00:11:36.910 "data_offset": 2048, 00:11:36.910 "data_size": 63488 00:11:36.910 }, 00:11:36.910 { 00:11:36.910 "name": "BaseBdev3", 00:11:36.910 "uuid": "26228a24-b668-4ee8-967f-fcf482b3ca6d", 00:11:36.910 "is_configured": true, 00:11:36.910 "data_offset": 2048, 00:11:36.910 "data_size": 63488 00:11:36.910 }, 00:11:36.910 { 00:11:36.910 "name": "BaseBdev4", 00:11:36.910 "uuid": "cef86652-47eb-47a7-9ef6-ec9b81f2b36b", 00:11:36.910 "is_configured": true, 00:11:36.910 "data_offset": 2048, 00:11:36.910 "data_size": 63488 00:11:36.910 } 00:11:36.910 ] 00:11:36.910 }' 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.910 09:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.479 09:25:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 [2024-12-12 09:25:11.309278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.479 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 [2024-12-12 09:25:11.464230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.739 [2024-12-12 09:25:11.619993] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.739 [2024-12-12 09:25:11.620121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.739 [2024-12-12 09:25:11.721719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.739 [2024-12-12 09:25:11.721863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.739 [2024-12-12 09:25:11.721910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.739 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.998 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.998 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.998 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:37.998 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.998 09:25:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.999 BaseBdev2 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:37.999 [ 00:11:37.999 { 00:11:37.999 "name": "BaseBdev2", 00:11:37.999 "aliases": [ 00:11:37.999 "bc58377b-a15b-4b18-a366-d0afd37767ce" 00:11:37.999 ], 00:11:37.999 "product_name": "Malloc disk", 00:11:37.999 "block_size": 512, 00:11:37.999 "num_blocks": 65536, 00:11:37.999 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:37.999 "assigned_rate_limits": { 00:11:37.999 "rw_ios_per_sec": 0, 00:11:37.999 "rw_mbytes_per_sec": 0, 00:11:37.999 "r_mbytes_per_sec": 0, 00:11:37.999 "w_mbytes_per_sec": 0 00:11:37.999 }, 00:11:37.999 "claimed": false, 00:11:37.999 "zoned": false, 00:11:37.999 "supported_io_types": { 00:11:37.999 "read": true, 00:11:37.999 "write": true, 00:11:37.999 "unmap": true, 00:11:37.999 "flush": true, 00:11:37.999 "reset": true, 00:11:37.999 "nvme_admin": false, 00:11:37.999 "nvme_io": false, 00:11:37.999 "nvme_io_md": false, 00:11:37.999 "write_zeroes": true, 00:11:37.999 "zcopy": true, 00:11:37.999 "get_zone_info": false, 00:11:37.999 "zone_management": false, 00:11:37.999 "zone_append": false, 00:11:37.999 "compare": false, 00:11:37.999 "compare_and_write": false, 00:11:37.999 "abort": true, 00:11:37.999 "seek_hole": false, 00:11:37.999 "seek_data": false, 00:11:37.999 "copy": true, 00:11:37.999 "nvme_iov_md": false 00:11:37.999 }, 00:11:37.999 "memory_domains": [ 00:11:37.999 { 00:11:37.999 "dma_device_id": "system", 00:11:37.999 "dma_device_type": 1 00:11:37.999 }, 00:11:37.999 { 00:11:37.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.999 "dma_device_type": 2 00:11:37.999 } 00:11:37.999 ], 00:11:37.999 "driver_specific": {} 00:11:37.999 } 00:11:37.999 ] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.999 09:25:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.999 BaseBdev3 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.999 [ 00:11:37.999 { 00:11:37.999 "name": "BaseBdev3", 00:11:37.999 "aliases": [ 00:11:37.999 "263f252b-a695-48d7-a7c0-59fea34246a3" 00:11:37.999 ], 00:11:37.999 "product_name": "Malloc disk", 00:11:37.999 "block_size": 512, 00:11:37.999 "num_blocks": 65536, 00:11:37.999 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:37.999 "assigned_rate_limits": { 00:11:37.999 "rw_ios_per_sec": 0, 00:11:37.999 "rw_mbytes_per_sec": 0, 00:11:37.999 "r_mbytes_per_sec": 0, 00:11:37.999 "w_mbytes_per_sec": 0 00:11:37.999 }, 00:11:37.999 "claimed": false, 00:11:37.999 "zoned": false, 00:11:37.999 "supported_io_types": { 00:11:37.999 "read": true, 00:11:37.999 "write": true, 00:11:37.999 "unmap": true, 00:11:37.999 "flush": true, 00:11:37.999 "reset": true, 00:11:37.999 "nvme_admin": false, 00:11:37.999 "nvme_io": false, 00:11:37.999 "nvme_io_md": false, 00:11:37.999 "write_zeroes": true, 00:11:37.999 "zcopy": true, 00:11:37.999 "get_zone_info": false, 00:11:37.999 "zone_management": false, 00:11:37.999 "zone_append": false, 00:11:37.999 "compare": false, 00:11:37.999 "compare_and_write": false, 00:11:37.999 "abort": true, 00:11:37.999 "seek_hole": false, 00:11:37.999 "seek_data": false, 00:11:37.999 "copy": true, 00:11:37.999 "nvme_iov_md": false 00:11:37.999 }, 00:11:37.999 "memory_domains": [ 00:11:37.999 { 00:11:37.999 "dma_device_id": "system", 00:11:37.999 "dma_device_type": 1 00:11:37.999 }, 00:11:37.999 { 00:11:37.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.999 "dma_device_type": 2 00:11:37.999 } 00:11:37.999 ], 00:11:37.999 "driver_specific": {} 00:11:37.999 } 00:11:37.999 ] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.999 BaseBdev4 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.999 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.000 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.000 09:25:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.000 09:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 [ 00:11:38.000 { 00:11:38.000 "name": "BaseBdev4", 00:11:38.000 "aliases": [ 00:11:38.000 "9422bb57-2939-45b9-a8cd-f6f546f27d48" 00:11:38.000 ], 00:11:38.000 "product_name": "Malloc disk", 00:11:38.000 "block_size": 512, 00:11:38.000 "num_blocks": 65536, 00:11:38.000 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:38.000 "assigned_rate_limits": { 00:11:38.000 "rw_ios_per_sec": 0, 00:11:38.000 "rw_mbytes_per_sec": 0, 00:11:38.000 "r_mbytes_per_sec": 0, 00:11:38.000 "w_mbytes_per_sec": 0 00:11:38.000 }, 00:11:38.000 "claimed": false, 00:11:38.000 "zoned": false, 00:11:38.000 "supported_io_types": { 00:11:38.000 "read": true, 00:11:38.000 "write": true, 00:11:38.000 "unmap": true, 00:11:38.000 "flush": true, 00:11:38.000 "reset": true, 00:11:38.000 "nvme_admin": false, 00:11:38.000 "nvme_io": false, 00:11:38.000 "nvme_io_md": false, 00:11:38.000 "write_zeroes": true, 00:11:38.000 "zcopy": true, 00:11:38.000 "get_zone_info": false, 00:11:38.000 "zone_management": false, 00:11:38.000 "zone_append": false, 00:11:38.000 "compare": false, 00:11:38.000 "compare_and_write": false, 00:11:38.000 "abort": true, 00:11:38.000 "seek_hole": false, 00:11:38.000 "seek_data": false, 00:11:38.000 "copy": true, 00:11:38.000 "nvme_iov_md": false 00:11:38.000 }, 00:11:38.000 "memory_domains": [ 00:11:38.000 { 00:11:38.000 "dma_device_id": "system", 00:11:38.000 "dma_device_type": 1 00:11:38.000 }, 00:11:38.000 { 00:11:38.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.000 "dma_device_type": 2 00:11:38.000 } 00:11:38.000 ], 00:11:38.000 "driver_specific": {} 00:11:38.000 } 00:11:38.000 ] 00:11:38.000 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.000 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:38.000 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.000 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.000 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.000 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.259 [2024-12-12 09:25:12.026070] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.259 [2024-12-12 09:25:12.026175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.259 [2024-12-12 09:25:12.026215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.259 [2024-12-12 09:25:12.028353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.259 [2024-12-12 09:25:12.028449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.259 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.259 "name": "Existed_Raid", 00:11:38.259 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:38.259 "strip_size_kb": 0, 00:11:38.259 "state": "configuring", 00:11:38.259 "raid_level": "raid1", 00:11:38.259 "superblock": true, 00:11:38.259 "num_base_bdevs": 4, 00:11:38.259 "num_base_bdevs_discovered": 3, 00:11:38.259 "num_base_bdevs_operational": 4, 00:11:38.259 "base_bdevs_list": [ 00:11:38.259 { 00:11:38.259 "name": "BaseBdev1", 00:11:38.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.259 "is_configured": false, 00:11:38.259 "data_offset": 0, 00:11:38.259 "data_size": 0 00:11:38.259 }, 00:11:38.259 { 00:11:38.259 "name": "BaseBdev2", 00:11:38.260 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 
00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 }, 00:11:38.260 { 00:11:38.260 "name": "BaseBdev3", 00:11:38.260 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 }, 00:11:38.260 { 00:11:38.260 "name": "BaseBdev4", 00:11:38.260 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 } 00:11:38.260 ] 00:11:38.260 }' 00:11:38.260 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.260 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.519 [2024-12-12 09:25:12.501212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.519 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.779 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.779 "name": "Existed_Raid", 00:11:38.779 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:38.779 "strip_size_kb": 0, 00:11:38.779 "state": "configuring", 00:11:38.779 "raid_level": "raid1", 00:11:38.779 "superblock": true, 00:11:38.779 "num_base_bdevs": 4, 00:11:38.779 "num_base_bdevs_discovered": 2, 00:11:38.779 "num_base_bdevs_operational": 4, 00:11:38.779 "base_bdevs_list": [ 00:11:38.779 { 00:11:38.779 "name": "BaseBdev1", 00:11:38.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.779 "is_configured": false, 00:11:38.779 "data_offset": 0, 00:11:38.779 "data_size": 0 00:11:38.779 }, 00:11:38.779 { 00:11:38.779 "name": null, 00:11:38.779 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:38.779 
"is_configured": false, 00:11:38.779 "data_offset": 0, 00:11:38.779 "data_size": 63488 00:11:38.779 }, 00:11:38.779 { 00:11:38.779 "name": "BaseBdev3", 00:11:38.779 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:38.779 "is_configured": true, 00:11:38.779 "data_offset": 2048, 00:11:38.779 "data_size": 63488 00:11:38.779 }, 00:11:38.779 { 00:11:38.779 "name": "BaseBdev4", 00:11:38.779 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:38.779 "is_configured": true, 00:11:38.779 "data_offset": 2048, 00:11:38.779 "data_size": 63488 00:11:38.779 } 00:11:38.779 ] 00:11:38.779 }' 00:11:38.779 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.779 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.038 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.038 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.038 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.038 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.038 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.039 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.039 09:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.039 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.039 09:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.039 [2024-12-12 09:25:13.042622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.039 BaseBdev1 
00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.039 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.298 [ 00:11:39.298 { 00:11:39.298 "name": "BaseBdev1", 00:11:39.298 "aliases": [ 00:11:39.298 "d55f399b-0750-4695-9854-3ee94407896d" 00:11:39.298 ], 00:11:39.298 "product_name": "Malloc disk", 00:11:39.298 "block_size": 512, 00:11:39.298 "num_blocks": 65536, 00:11:39.298 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:39.298 "assigned_rate_limits": { 00:11:39.298 
"rw_ios_per_sec": 0, 00:11:39.298 "rw_mbytes_per_sec": 0, 00:11:39.298 "r_mbytes_per_sec": 0, 00:11:39.298 "w_mbytes_per_sec": 0 00:11:39.298 }, 00:11:39.298 "claimed": true, 00:11:39.298 "claim_type": "exclusive_write", 00:11:39.298 "zoned": false, 00:11:39.298 "supported_io_types": { 00:11:39.298 "read": true, 00:11:39.298 "write": true, 00:11:39.298 "unmap": true, 00:11:39.298 "flush": true, 00:11:39.298 "reset": true, 00:11:39.298 "nvme_admin": false, 00:11:39.298 "nvme_io": false, 00:11:39.298 "nvme_io_md": false, 00:11:39.298 "write_zeroes": true, 00:11:39.298 "zcopy": true, 00:11:39.298 "get_zone_info": false, 00:11:39.298 "zone_management": false, 00:11:39.298 "zone_append": false, 00:11:39.298 "compare": false, 00:11:39.298 "compare_and_write": false, 00:11:39.298 "abort": true, 00:11:39.298 "seek_hole": false, 00:11:39.298 "seek_data": false, 00:11:39.298 "copy": true, 00:11:39.298 "nvme_iov_md": false 00:11:39.298 }, 00:11:39.298 "memory_domains": [ 00:11:39.298 { 00:11:39.298 "dma_device_id": "system", 00:11:39.298 "dma_device_type": 1 00:11:39.298 }, 00:11:39.298 { 00:11:39.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.298 "dma_device_type": 2 00:11:39.298 } 00:11:39.298 ], 00:11:39.299 "driver_specific": {} 00:11:39.299 } 00:11:39.299 ] 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.299 "name": "Existed_Raid", 00:11:39.299 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:39.299 "strip_size_kb": 0, 00:11:39.299 "state": "configuring", 00:11:39.299 "raid_level": "raid1", 00:11:39.299 "superblock": true, 00:11:39.299 "num_base_bdevs": 4, 00:11:39.299 "num_base_bdevs_discovered": 3, 00:11:39.299 "num_base_bdevs_operational": 4, 00:11:39.299 "base_bdevs_list": [ 00:11:39.299 { 00:11:39.299 "name": "BaseBdev1", 00:11:39.299 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:39.299 "is_configured": true, 00:11:39.299 "data_offset": 2048, 00:11:39.299 "data_size": 63488 
00:11:39.299 }, 00:11:39.299 { 00:11:39.299 "name": null, 00:11:39.299 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:39.299 "is_configured": false, 00:11:39.299 "data_offset": 0, 00:11:39.299 "data_size": 63488 00:11:39.299 }, 00:11:39.299 { 00:11:39.299 "name": "BaseBdev3", 00:11:39.299 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:39.299 "is_configured": true, 00:11:39.299 "data_offset": 2048, 00:11:39.299 "data_size": 63488 00:11:39.299 }, 00:11:39.299 { 00:11:39.299 "name": "BaseBdev4", 00:11:39.299 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:39.299 "is_configured": true, 00:11:39.299 "data_offset": 2048, 00:11:39.299 "data_size": 63488 00:11:39.299 } 00:11:39.299 ] 00:11:39.299 }' 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.299 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.559 
[2024-12-12 09:25:13.533815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.559 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.819 09:25:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.819 "name": "Existed_Raid", 00:11:39.819 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:39.819 "strip_size_kb": 0, 00:11:39.819 "state": "configuring", 00:11:39.819 "raid_level": "raid1", 00:11:39.819 "superblock": true, 00:11:39.819 "num_base_bdevs": 4, 00:11:39.819 "num_base_bdevs_discovered": 2, 00:11:39.819 "num_base_bdevs_operational": 4, 00:11:39.819 "base_bdevs_list": [ 00:11:39.819 { 00:11:39.819 "name": "BaseBdev1", 00:11:39.819 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:39.819 "is_configured": true, 00:11:39.819 "data_offset": 2048, 00:11:39.819 "data_size": 63488 00:11:39.819 }, 00:11:39.819 { 00:11:39.819 "name": null, 00:11:39.819 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:39.819 "is_configured": false, 00:11:39.819 "data_offset": 0, 00:11:39.819 "data_size": 63488 00:11:39.819 }, 00:11:39.819 { 00:11:39.819 "name": null, 00:11:39.819 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:39.819 "is_configured": false, 00:11:39.819 "data_offset": 0, 00:11:39.819 "data_size": 63488 00:11:39.819 }, 00:11:39.819 { 00:11:39.819 "name": "BaseBdev4", 00:11:39.819 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:39.819 "is_configured": true, 00:11:39.819 "data_offset": 2048, 00:11:39.819 "data_size": 63488 00:11:39.819 } 00:11:39.819 ] 00:11:39.819 }' 00:11:39.819 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.819 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.079 09:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.079 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.079 
09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 09:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 [2024-12-12 09:25:14.009001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.079 "name": "Existed_Raid", 00:11:40.079 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:40.079 "strip_size_kb": 0, 00:11:40.079 "state": "configuring", 00:11:40.079 "raid_level": "raid1", 00:11:40.079 "superblock": true, 00:11:40.079 "num_base_bdevs": 4, 00:11:40.079 "num_base_bdevs_discovered": 3, 00:11:40.079 "num_base_bdevs_operational": 4, 00:11:40.079 "base_bdevs_list": [ 00:11:40.079 { 00:11:40.079 "name": "BaseBdev1", 00:11:40.079 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:40.079 "is_configured": true, 00:11:40.079 "data_offset": 2048, 00:11:40.079 "data_size": 63488 00:11:40.079 }, 00:11:40.079 { 00:11:40.079 "name": null, 00:11:40.079 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:40.079 "is_configured": false, 00:11:40.079 "data_offset": 0, 00:11:40.079 "data_size": 63488 00:11:40.079 }, 00:11:40.079 { 00:11:40.079 "name": "BaseBdev3", 00:11:40.079 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:40.079 "is_configured": true, 00:11:40.079 "data_offset": 2048, 00:11:40.079 "data_size": 63488 00:11:40.079 }, 00:11:40.079 { 00:11:40.079 "name": "BaseBdev4", 00:11:40.079 "uuid": 
"9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:40.079 "is_configured": true, 00:11:40.079 "data_offset": 2048, 00:11:40.079 "data_size": 63488 00:11:40.079 } 00:11:40.079 ] 00:11:40.079 }' 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.079 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.649 [2024-12-12 09:25:14.444280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.649 "name": "Existed_Raid", 00:11:40.649 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:40.649 "strip_size_kb": 0, 00:11:40.649 "state": "configuring", 00:11:40.649 "raid_level": "raid1", 00:11:40.649 "superblock": true, 00:11:40.649 "num_base_bdevs": 4, 00:11:40.649 "num_base_bdevs_discovered": 2, 00:11:40.649 "num_base_bdevs_operational": 4, 00:11:40.649 "base_bdevs_list": [ 00:11:40.649 { 00:11:40.649 "name": null, 00:11:40.649 
"uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:40.649 "is_configured": false, 00:11:40.649 "data_offset": 0, 00:11:40.649 "data_size": 63488 00:11:40.649 }, 00:11:40.649 { 00:11:40.649 "name": null, 00:11:40.649 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:40.649 "is_configured": false, 00:11:40.649 "data_offset": 0, 00:11:40.649 "data_size": 63488 00:11:40.649 }, 00:11:40.649 { 00:11:40.649 "name": "BaseBdev3", 00:11:40.649 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:40.649 "is_configured": true, 00:11:40.649 "data_offset": 2048, 00:11:40.649 "data_size": 63488 00:11:40.649 }, 00:11:40.649 { 00:11:40.649 "name": "BaseBdev4", 00:11:40.649 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:40.649 "is_configured": true, 00:11:40.649 "data_offset": 2048, 00:11:40.649 "data_size": 63488 00:11:40.649 } 00:11:40.649 ] 00:11:40.649 }' 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.649 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.219 [2024-12-12 09:25:14.968870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.219 09:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.219 09:25:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.219 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.219 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.219 "name": "Existed_Raid", 00:11:41.219 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:41.219 "strip_size_kb": 0, 00:11:41.219 "state": "configuring", 00:11:41.219 "raid_level": "raid1", 00:11:41.219 "superblock": true, 00:11:41.219 "num_base_bdevs": 4, 00:11:41.219 "num_base_bdevs_discovered": 3, 00:11:41.219 "num_base_bdevs_operational": 4, 00:11:41.219 "base_bdevs_list": [ 00:11:41.219 { 00:11:41.219 "name": null, 00:11:41.219 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:41.219 "is_configured": false, 00:11:41.219 "data_offset": 0, 00:11:41.219 "data_size": 63488 00:11:41.219 }, 00:11:41.219 { 00:11:41.219 "name": "BaseBdev2", 00:11:41.219 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:41.219 "is_configured": true, 00:11:41.219 "data_offset": 2048, 00:11:41.219 "data_size": 63488 00:11:41.219 }, 00:11:41.219 { 00:11:41.219 "name": "BaseBdev3", 00:11:41.219 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:41.219 "is_configured": true, 00:11:41.219 "data_offset": 2048, 00:11:41.219 "data_size": 63488 00:11:41.219 }, 00:11:41.219 { 00:11:41.219 "name": "BaseBdev4", 00:11:41.219 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:41.219 "is_configured": true, 00:11:41.219 "data_offset": 2048, 00:11:41.219 "data_size": 63488 00:11:41.219 } 00:11:41.219 ] 00:11:41.219 }' 00:11:41.219 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.219 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.479 09:25:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d55f399b-0750-4695-9854-3ee94407896d 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.479 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.739 [2024-12-12 09:25:15.533900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.739 [2024-12-12 09:25:15.534285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.739 [2024-12-12 09:25:15.534343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.739 [2024-12-12 09:25:15.534648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:41.739 [2024-12-12 09:25:15.534861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.739 NewBaseBdev 00:11:41.739 [2024-12-12 09:25:15.534911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.739 [2024-12-12 09:25:15.535107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.739 09:25:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.739 [ 00:11:41.739 { 00:11:41.739 "name": "NewBaseBdev", 00:11:41.739 "aliases": [ 00:11:41.739 "d55f399b-0750-4695-9854-3ee94407896d" 00:11:41.739 ], 00:11:41.739 "product_name": "Malloc disk", 00:11:41.739 "block_size": 512, 00:11:41.739 "num_blocks": 65536, 00:11:41.739 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:41.739 "assigned_rate_limits": { 00:11:41.739 "rw_ios_per_sec": 0, 00:11:41.739 "rw_mbytes_per_sec": 0, 00:11:41.739 "r_mbytes_per_sec": 0, 00:11:41.739 "w_mbytes_per_sec": 0 00:11:41.739 }, 00:11:41.739 "claimed": true, 00:11:41.739 "claim_type": "exclusive_write", 00:11:41.739 "zoned": false, 00:11:41.739 "supported_io_types": { 00:11:41.739 "read": true, 00:11:41.739 "write": true, 00:11:41.739 "unmap": true, 00:11:41.739 "flush": true, 00:11:41.739 "reset": true, 00:11:41.739 "nvme_admin": false, 00:11:41.739 "nvme_io": false, 00:11:41.739 "nvme_io_md": false, 00:11:41.739 "write_zeroes": true, 00:11:41.739 "zcopy": true, 00:11:41.739 "get_zone_info": false, 00:11:41.739 "zone_management": false, 00:11:41.739 "zone_append": false, 00:11:41.739 "compare": false, 00:11:41.739 "compare_and_write": false, 00:11:41.739 "abort": true, 00:11:41.739 "seek_hole": false, 00:11:41.739 "seek_data": false, 00:11:41.739 "copy": true, 00:11:41.739 "nvme_iov_md": false 00:11:41.739 }, 00:11:41.739 "memory_domains": [ 00:11:41.739 { 00:11:41.739 "dma_device_id": "system", 00:11:41.739 "dma_device_type": 1 00:11:41.739 }, 00:11:41.739 { 00:11:41.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.739 "dma_device_type": 2 00:11:41.739 } 00:11:41.739 ], 00:11:41.739 "driver_specific": {} 00:11:41.739 } 00:11:41.739 ] 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.739 09:25:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.739 "name": "Existed_Raid", 00:11:41.739 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:41.739 "strip_size_kb": 0, 00:11:41.739 
"state": "online", 00:11:41.739 "raid_level": "raid1", 00:11:41.739 "superblock": true, 00:11:41.739 "num_base_bdevs": 4, 00:11:41.739 "num_base_bdevs_discovered": 4, 00:11:41.739 "num_base_bdevs_operational": 4, 00:11:41.739 "base_bdevs_list": [ 00:11:41.739 { 00:11:41.739 "name": "NewBaseBdev", 00:11:41.739 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:41.739 "is_configured": true, 00:11:41.739 "data_offset": 2048, 00:11:41.739 "data_size": 63488 00:11:41.739 }, 00:11:41.739 { 00:11:41.739 "name": "BaseBdev2", 00:11:41.739 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:41.739 "is_configured": true, 00:11:41.739 "data_offset": 2048, 00:11:41.739 "data_size": 63488 00:11:41.739 }, 00:11:41.739 { 00:11:41.739 "name": "BaseBdev3", 00:11:41.739 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:41.739 "is_configured": true, 00:11:41.739 "data_offset": 2048, 00:11:41.739 "data_size": 63488 00:11:41.739 }, 00:11:41.739 { 00:11:41.739 "name": "BaseBdev4", 00:11:41.739 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:41.739 "is_configured": true, 00:11:41.739 "data_offset": 2048, 00:11:41.739 "data_size": 63488 00:11:41.739 } 00:11:41.739 ] 00:11:41.739 }' 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.739 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.999 
09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.999 09:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.999 [2024-12-12 09:25:15.997514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.999 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.258 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.258 "name": "Existed_Raid", 00:11:42.258 "aliases": [ 00:11:42.258 "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5" 00:11:42.258 ], 00:11:42.258 "product_name": "Raid Volume", 00:11:42.258 "block_size": 512, 00:11:42.258 "num_blocks": 63488, 00:11:42.258 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:42.258 "assigned_rate_limits": { 00:11:42.258 "rw_ios_per_sec": 0, 00:11:42.258 "rw_mbytes_per_sec": 0, 00:11:42.258 "r_mbytes_per_sec": 0, 00:11:42.258 "w_mbytes_per_sec": 0 00:11:42.258 }, 00:11:42.258 "claimed": false, 00:11:42.258 "zoned": false, 00:11:42.258 "supported_io_types": { 00:11:42.258 "read": true, 00:11:42.258 "write": true, 00:11:42.258 "unmap": false, 00:11:42.258 "flush": false, 00:11:42.258 "reset": true, 00:11:42.258 "nvme_admin": false, 00:11:42.258 "nvme_io": false, 00:11:42.258 "nvme_io_md": false, 00:11:42.258 "write_zeroes": true, 00:11:42.258 "zcopy": false, 00:11:42.258 "get_zone_info": false, 00:11:42.258 "zone_management": false, 00:11:42.258 "zone_append": false, 00:11:42.258 "compare": false, 00:11:42.258 "compare_and_write": false, 00:11:42.258 
"abort": false, 00:11:42.258 "seek_hole": false, 00:11:42.258 "seek_data": false, 00:11:42.258 "copy": false, 00:11:42.258 "nvme_iov_md": false 00:11:42.258 }, 00:11:42.258 "memory_domains": [ 00:11:42.258 { 00:11:42.258 "dma_device_id": "system", 00:11:42.258 "dma_device_type": 1 00:11:42.258 }, 00:11:42.258 { 00:11:42.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.258 "dma_device_type": 2 00:11:42.258 }, 00:11:42.258 { 00:11:42.258 "dma_device_id": "system", 00:11:42.258 "dma_device_type": 1 00:11:42.258 }, 00:11:42.258 { 00:11:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.259 "dma_device_type": 2 00:11:42.259 }, 00:11:42.259 { 00:11:42.259 "dma_device_id": "system", 00:11:42.259 "dma_device_type": 1 00:11:42.259 }, 00:11:42.259 { 00:11:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.259 "dma_device_type": 2 00:11:42.259 }, 00:11:42.259 { 00:11:42.259 "dma_device_id": "system", 00:11:42.259 "dma_device_type": 1 00:11:42.259 }, 00:11:42.259 { 00:11:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.259 "dma_device_type": 2 00:11:42.259 } 00:11:42.259 ], 00:11:42.259 "driver_specific": { 00:11:42.259 "raid": { 00:11:42.259 "uuid": "a72ce486-c40d-4f73-8f3a-41f9d08cd6f5", 00:11:42.259 "strip_size_kb": 0, 00:11:42.259 "state": "online", 00:11:42.259 "raid_level": "raid1", 00:11:42.259 "superblock": true, 00:11:42.259 "num_base_bdevs": 4, 00:11:42.259 "num_base_bdevs_discovered": 4, 00:11:42.259 "num_base_bdevs_operational": 4, 00:11:42.259 "base_bdevs_list": [ 00:11:42.259 { 00:11:42.259 "name": "NewBaseBdev", 00:11:42.259 "uuid": "d55f399b-0750-4695-9854-3ee94407896d", 00:11:42.259 "is_configured": true, 00:11:42.259 "data_offset": 2048, 00:11:42.259 "data_size": 63488 00:11:42.259 }, 00:11:42.259 { 00:11:42.259 "name": "BaseBdev2", 00:11:42.259 "uuid": "bc58377b-a15b-4b18-a366-d0afd37767ce", 00:11:42.259 "is_configured": true, 00:11:42.259 "data_offset": 2048, 00:11:42.259 "data_size": 63488 00:11:42.259 }, 00:11:42.259 { 
00:11:42.259 "name": "BaseBdev3", 00:11:42.259 "uuid": "263f252b-a695-48d7-a7c0-59fea34246a3", 00:11:42.259 "is_configured": true, 00:11:42.259 "data_offset": 2048, 00:11:42.259 "data_size": 63488 00:11:42.259 }, 00:11:42.259 { 00:11:42.259 "name": "BaseBdev4", 00:11:42.259 "uuid": "9422bb57-2939-45b9-a8cd-f6f546f27d48", 00:11:42.259 "is_configured": true, 00:11:42.259 "data_offset": 2048, 00:11:42.259 "data_size": 63488 00:11:42.259 } 00:11:42.259 ] 00:11:42.259 } 00:11:42.259 } 00:11:42.259 }' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.259 BaseBdev2 00:11:42.259 BaseBdev3 00:11:42.259 BaseBdev4' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.259 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.518 [2024-12-12 09:25:16.296586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.518 [2024-12-12 09:25:16.296619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.518 [2024-12-12 09:25:16.296700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.518 [2024-12-12 09:25:16.297039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.518 [2024-12-12 09:25:16.297055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74991 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74991 ']' 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74991 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.518 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74991 00:11:42.518 killing process with pid 74991 00:11:42.519 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.519 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.519 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74991' 00:11:42.519 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74991 00:11:42.519 [2024-12-12 09:25:16.343467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.519 09:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74991 00:11:42.778 [2024-12-12 09:25:16.760980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.158 09:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.158 00:11:44.158 real 0m11.462s 00:11:44.158 user 0m17.931s 00:11:44.158 sys 0m2.157s 00:11:44.158 09:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:44.158 09:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.158 ************************************ 00:11:44.158 END TEST raid_state_function_test_sb 00:11:44.158 ************************************ 00:11:44.158 09:25:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:44.158 09:25:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.158 09:25:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.158 09:25:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.158 ************************************ 00:11:44.158 START TEST raid_superblock_test 00:11:44.158 ************************************ 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:44.158 09:25:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75656 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75656 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75656 ']' 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.158 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.158 [2024-12-12 09:25:18.137342] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:44.158 [2024-12-12 09:25:18.137461] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75656 ] 00:11:44.417 [2024-12-12 09:25:18.304973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.417 [2024-12-12 09:25:18.436287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.676 [2024-12-12 09:25:18.675426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.676 [2024-12-12 09:25:18.675474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:45.245 
09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 malloc1 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 [2024-12-12 09:25:19.022996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.245 [2024-12-12 09:25:19.023144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.245 [2024-12-12 09:25:19.023188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.245 [2024-12-12 09:25:19.023218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.245 [2024-12-12 09:25:19.025688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.245 [2024-12-12 09:25:19.025777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.245 pt1 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 malloc2 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 [2024-12-12 09:25:19.085863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.245 [2024-12-12 09:25:19.085975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.245 [2024-12-12 09:25:19.086016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.245 [2024-12-12 09:25:19.086044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.245 [2024-12-12 09:25:19.088434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.245 [2024-12-12 09:25:19.088515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.245 
pt2 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 malloc3 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 [2024-12-12 09:25:19.159971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.245 [2024-12-12 09:25:19.160080] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.245 [2024-12-12 09:25:19.160121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:45.245 [2024-12-12 09:25:19.160149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.245 [2024-12-12 09:25:19.162535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.245 [2024-12-12 09:25:19.162618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.245 pt3 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 malloc4 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.245 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.245 [2024-12-12 09:25:19.223759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:45.245 [2024-12-12 09:25:19.223875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.246 [2024-12-12 09:25:19.223901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:45.246 [2024-12-12 09:25:19.223911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.246 [2024-12-12 09:25:19.226315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.246 [2024-12-12 09:25:19.226348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:45.246 pt4 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.246 [2024-12-12 09:25:19.235776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.246 [2024-12-12 09:25:19.237781] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.246 [2024-12-12 09:25:19.237917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.246 [2024-12-12 09:25:19.238016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.246 [2024-12-12 09:25:19.238224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:45.246 [2024-12-12 09:25:19.238242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.246 [2024-12-12 09:25:19.238495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:45.246 [2024-12-12 09:25:19.238686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:45.246 [2024-12-12 09:25:19.238703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:45.246 [2024-12-12 09:25:19.238852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.246 
09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.246 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.505 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.505 "name": "raid_bdev1", 00:11:45.505 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:45.505 "strip_size_kb": 0, 00:11:45.505 "state": "online", 00:11:45.505 "raid_level": "raid1", 00:11:45.505 "superblock": true, 00:11:45.505 "num_base_bdevs": 4, 00:11:45.505 "num_base_bdevs_discovered": 4, 00:11:45.505 "num_base_bdevs_operational": 4, 00:11:45.505 "base_bdevs_list": [ 00:11:45.505 { 00:11:45.505 "name": "pt1", 00:11:45.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.505 "is_configured": true, 00:11:45.505 "data_offset": 2048, 00:11:45.505 "data_size": 63488 00:11:45.505 }, 00:11:45.505 { 00:11:45.505 "name": "pt2", 00:11:45.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.505 "is_configured": true, 00:11:45.505 "data_offset": 2048, 00:11:45.505 "data_size": 63488 00:11:45.505 }, 00:11:45.505 { 00:11:45.505 "name": "pt3", 00:11:45.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.505 "is_configured": true, 00:11:45.505 "data_offset": 2048, 00:11:45.505 "data_size": 63488 
00:11:45.505 }, 00:11:45.505 { 00:11:45.505 "name": "pt4", 00:11:45.505 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.505 "is_configured": true, 00:11:45.505 "data_offset": 2048, 00:11:45.505 "data_size": 63488 00:11:45.505 } 00:11:45.505 ] 00:11:45.505 }' 00:11:45.506 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.506 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.765 [2024-12-12 09:25:19.679407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.765 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.765 "name": "raid_bdev1", 00:11:45.765 "aliases": [ 00:11:45.765 "1059a42b-1a9f-4709-bf5b-ab9700b53b01" 00:11:45.765 ], 
00:11:45.765 "product_name": "Raid Volume", 00:11:45.765 "block_size": 512, 00:11:45.765 "num_blocks": 63488, 00:11:45.765 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:45.765 "assigned_rate_limits": { 00:11:45.765 "rw_ios_per_sec": 0, 00:11:45.765 "rw_mbytes_per_sec": 0, 00:11:45.765 "r_mbytes_per_sec": 0, 00:11:45.765 "w_mbytes_per_sec": 0 00:11:45.765 }, 00:11:45.765 "claimed": false, 00:11:45.765 "zoned": false, 00:11:45.765 "supported_io_types": { 00:11:45.765 "read": true, 00:11:45.765 "write": true, 00:11:45.765 "unmap": false, 00:11:45.765 "flush": false, 00:11:45.765 "reset": true, 00:11:45.765 "nvme_admin": false, 00:11:45.765 "nvme_io": false, 00:11:45.765 "nvme_io_md": false, 00:11:45.765 "write_zeroes": true, 00:11:45.765 "zcopy": false, 00:11:45.765 "get_zone_info": false, 00:11:45.765 "zone_management": false, 00:11:45.765 "zone_append": false, 00:11:45.765 "compare": false, 00:11:45.765 "compare_and_write": false, 00:11:45.765 "abort": false, 00:11:45.765 "seek_hole": false, 00:11:45.765 "seek_data": false, 00:11:45.765 "copy": false, 00:11:45.765 "nvme_iov_md": false 00:11:45.765 }, 00:11:45.765 "memory_domains": [ 00:11:45.765 { 00:11:45.765 "dma_device_id": "system", 00:11:45.765 "dma_device_type": 1 00:11:45.765 }, 00:11:45.765 { 00:11:45.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.765 "dma_device_type": 2 00:11:45.765 }, 00:11:45.765 { 00:11:45.765 "dma_device_id": "system", 00:11:45.765 "dma_device_type": 1 00:11:45.765 }, 00:11:45.765 { 00:11:45.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.765 "dma_device_type": 2 00:11:45.765 }, 00:11:45.765 { 00:11:45.765 "dma_device_id": "system", 00:11:45.765 "dma_device_type": 1 00:11:45.765 }, 00:11:45.765 { 00:11:45.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.765 "dma_device_type": 2 00:11:45.765 }, 00:11:45.765 { 00:11:45.765 "dma_device_id": "system", 00:11:45.765 "dma_device_type": 1 00:11:45.766 }, 00:11:45.766 { 00:11:45.766 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:45.766 "dma_device_type": 2 00:11:45.766 } 00:11:45.766 ], 00:11:45.766 "driver_specific": { 00:11:45.766 "raid": { 00:11:45.766 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:45.766 "strip_size_kb": 0, 00:11:45.766 "state": "online", 00:11:45.766 "raid_level": "raid1", 00:11:45.766 "superblock": true, 00:11:45.766 "num_base_bdevs": 4, 00:11:45.766 "num_base_bdevs_discovered": 4, 00:11:45.766 "num_base_bdevs_operational": 4, 00:11:45.766 "base_bdevs_list": [ 00:11:45.766 { 00:11:45.766 "name": "pt1", 00:11:45.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.766 "is_configured": true, 00:11:45.766 "data_offset": 2048, 00:11:45.766 "data_size": 63488 00:11:45.766 }, 00:11:45.766 { 00:11:45.766 "name": "pt2", 00:11:45.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.766 "is_configured": true, 00:11:45.766 "data_offset": 2048, 00:11:45.766 "data_size": 63488 00:11:45.766 }, 00:11:45.766 { 00:11:45.766 "name": "pt3", 00:11:45.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.766 "is_configured": true, 00:11:45.766 "data_offset": 2048, 00:11:45.766 "data_size": 63488 00:11:45.766 }, 00:11:45.766 { 00:11:45.766 "name": "pt4", 00:11:45.766 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.766 "is_configured": true, 00:11:45.766 "data_offset": 2048, 00:11:45.766 "data_size": 63488 00:11:45.766 } 00:11:45.766 ] 00:11:45.766 } 00:11:45.766 } 00:11:45.766 }' 00:11:45.766 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.766 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.766 pt2 00:11:45.766 pt3 00:11:45.766 pt4' 00:11:45.766 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.026 09:25:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.026 09:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:46.026 [2024-12-12 09:25:20.010745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.026 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1059a42b-1a9f-4709-bf5b-ab9700b53b01 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1059a42b-1a9f-4709-bf5b-ab9700b53b01 ']' 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.286 [2024-12-12 09:25:20.062384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.286 [2024-12-12 09:25:20.062456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.286 [2024-12-12 09:25:20.062564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.286 [2024-12-12 09:25:20.062655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.286 [2024-12-12 09:25:20.062671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.286 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 [2024-12-12 09:25:20.230126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.287 [2024-12-12 09:25:20.232341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.287 [2024-12-12 09:25:20.232440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:46.287 [2024-12-12 09:25:20.232496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:46.287 [2024-12-12 09:25:20.232597] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:46.287 [2024-12-12 09:25:20.232668] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.287 [2024-12-12 09:25:20.232688] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:46.287 [2024-12-12 09:25:20.232707] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:46.287 [2024-12-12 09:25:20.232719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.287 [2024-12-12 09:25:20.232734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
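The duplicate `bdev_raid_create` above is driven through the harness's NOT/valid_exec_arg wrapper: the RPC is expected to fail (stale superblocks are still on malloc1..malloc4), and the surrounding `es` bookkeeping turns a nonzero exit into a pass. A rough stand-in for that pattern, with a hypothetical `failing_rpc` in place of the real `rpc_cmd`:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for an rpc_cmd call that fails with EEXIST
# (JSON-RPC code -17, "File exists"), as in the log above.
failing_rpc() { echo '{"code": -17, "message": "File exists"}' >&2; return 1; }

es=0
failing_rpc || es=$?
# (( !es == 0 )) succeeds only when es is nonzero,
# i.e. the wrapped command failed as the test expects.
if (( !es == 0 )); then
    echo "expected failure observed (es=$es)"
fi
```

This mirrors the `autotest_common.sh@652-679` lines in the log: `es` is captured from the failing invocation and the final `(( !es == 0 ))` check is what lets the NOT-wrapped test continue.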
00:11:46.287 request: 00:11:46.287 { 00:11:46.287 "name": "raid_bdev1", 00:11:46.287 "raid_level": "raid1", 00:11:46.287 "base_bdevs": [ 00:11:46.287 "malloc1", 00:11:46.287 "malloc2", 00:11:46.287 "malloc3", 00:11:46.287 "malloc4" 00:11:46.287 ], 00:11:46.287 "superblock": false, 00:11:46.287 "method": "bdev_raid_create", 00:11:46.287 "req_id": 1 00:11:46.287 } 00:11:46.287 Got JSON-RPC error response 00:11:46.287 response: 00:11:46.287 { 00:11:46.287 "code": -17, 00:11:46.287 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.287 } 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:46.287 09:25:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 [2024-12-12 09:25:20.294029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.287 [2024-12-12 09:25:20.294142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.287 [2024-12-12 09:25:20.294178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:46.287 [2024-12-12 09:25:20.294208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.287 [2024-12-12 09:25:20.296759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.287 [2024-12-12 09:25:20.296841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.287 [2024-12-12 09:25:20.296969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:46.287 [2024-12-12 09:25:20.297069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.287 pt1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.287 09:25:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.547 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.547 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.547 "name": "raid_bdev1", 00:11:46.547 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:46.547 "strip_size_kb": 0, 00:11:46.547 "state": "configuring", 00:11:46.547 "raid_level": "raid1", 00:11:46.547 "superblock": true, 00:11:46.547 "num_base_bdevs": 4, 00:11:46.547 "num_base_bdevs_discovered": 1, 00:11:46.547 "num_base_bdevs_operational": 4, 00:11:46.547 "base_bdevs_list": [ 00:11:46.547 { 00:11:46.547 "name": "pt1", 00:11:46.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.547 "is_configured": true, 00:11:46.547 "data_offset": 2048, 00:11:46.547 "data_size": 63488 00:11:46.547 }, 00:11:46.547 { 00:11:46.547 "name": null, 00:11:46.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.547 "is_configured": false, 00:11:46.547 "data_offset": 2048, 00:11:46.547 "data_size": 63488 00:11:46.547 }, 00:11:46.547 { 00:11:46.547 "name": null, 00:11:46.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.547 
"is_configured": false, 00:11:46.547 "data_offset": 2048, 00:11:46.547 "data_size": 63488 00:11:46.547 }, 00:11:46.547 { 00:11:46.547 "name": null, 00:11:46.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.547 "is_configured": false, 00:11:46.547 "data_offset": 2048, 00:11:46.547 "data_size": 63488 00:11:46.547 } 00:11:46.547 ] 00:11:46.547 }' 00:11:46.547 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.547 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:46.806 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.806 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.806 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.807 [2024-12-12 09:25:20.785193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.807 [2024-12-12 09:25:20.785276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.807 [2024-12-12 09:25:20.785302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:46.807 [2024-12-12 09:25:20.785314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.807 [2024-12-12 09:25:20.785855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.807 [2024-12-12 09:25:20.785889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.807 [2024-12-12 09:25:20.786004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.807 [2024-12-12 09:25:20.786033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:46.807 pt2 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.807 [2024-12-12 09:25:20.797167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.807 09:25:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.807 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.066 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.066 "name": "raid_bdev1", 00:11:47.066 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:47.066 "strip_size_kb": 0, 00:11:47.066 "state": "configuring", 00:11:47.066 "raid_level": "raid1", 00:11:47.066 "superblock": true, 00:11:47.066 "num_base_bdevs": 4, 00:11:47.066 "num_base_bdevs_discovered": 1, 00:11:47.066 "num_base_bdevs_operational": 4, 00:11:47.066 "base_bdevs_list": [ 00:11:47.066 { 00:11:47.066 "name": "pt1", 00:11:47.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.066 "is_configured": true, 00:11:47.066 "data_offset": 2048, 00:11:47.066 "data_size": 63488 00:11:47.066 }, 00:11:47.066 { 00:11:47.066 "name": null, 00:11:47.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.066 "is_configured": false, 00:11:47.066 "data_offset": 0, 00:11:47.066 "data_size": 63488 00:11:47.066 }, 00:11:47.066 { 00:11:47.066 "name": null, 00:11:47.066 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.066 "is_configured": false, 00:11:47.066 "data_offset": 2048, 00:11:47.066 "data_size": 63488 00:11:47.066 }, 00:11:47.066 { 00:11:47.066 "name": null, 00:11:47.066 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.066 "is_configured": false, 00:11:47.066 "data_offset": 2048, 00:11:47.066 "data_size": 63488 00:11:47.066 } 00:11:47.066 ] 00:11:47.066 }' 00:11:47.066 09:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.066 09:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.327 [2024-12-12 09:25:21.248347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.327 [2024-12-12 09:25:21.248471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.327 [2024-12-12 09:25:21.248510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:47.327 [2024-12-12 09:25:21.248536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.327 [2024-12-12 09:25:21.249093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.327 [2024-12-12 09:25:21.249153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.327 [2024-12-12 09:25:21.249267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.327 [2024-12-12 09:25:21.249317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.327 pt2 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.327 09:25:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.327 [2024-12-12 09:25:21.260307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.327 [2024-12-12 09:25:21.260408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.327 [2024-12-12 09:25:21.260443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:47.327 [2024-12-12 09:25:21.260469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.327 [2024-12-12 09:25:21.260858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.327 [2024-12-12 09:25:21.260914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.327 [2024-12-12 09:25:21.261013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.327 [2024-12-12 09:25:21.261059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.327 pt3 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.327 [2024-12-12 09:25:21.272274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.327 [2024-12-12 
09:25:21.272354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.327 [2024-12-12 09:25:21.272386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:47.327 [2024-12-12 09:25:21.272412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.327 [2024-12-12 09:25:21.272832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.327 [2024-12-12 09:25:21.272888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.327 [2024-12-12 09:25:21.272982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.327 [2024-12-12 09:25:21.273010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.327 [2024-12-12 09:25:21.273173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.327 [2024-12-12 09:25:21.273182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.327 [2024-12-12 09:25:21.273448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:47.327 [2024-12-12 09:25:21.273603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.327 [2024-12-12 09:25:21.273617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.327 [2024-12-12 09:25:21.273764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.327 pt4 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.327 "name": "raid_bdev1", 00:11:47.327 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:47.327 "strip_size_kb": 0, 00:11:47.327 "state": "online", 00:11:47.327 "raid_level": "raid1", 00:11:47.327 "superblock": true, 00:11:47.327 "num_base_bdevs": 4, 00:11:47.327 
"num_base_bdevs_discovered": 4, 00:11:47.327 "num_base_bdevs_operational": 4, 00:11:47.327 "base_bdevs_list": [ 00:11:47.327 { 00:11:47.327 "name": "pt1", 00:11:47.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.327 "is_configured": true, 00:11:47.327 "data_offset": 2048, 00:11:47.327 "data_size": 63488 00:11:47.327 }, 00:11:47.327 { 00:11:47.327 "name": "pt2", 00:11:47.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.327 "is_configured": true, 00:11:47.327 "data_offset": 2048, 00:11:47.327 "data_size": 63488 00:11:47.327 }, 00:11:47.327 { 00:11:47.327 "name": "pt3", 00:11:47.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.327 "is_configured": true, 00:11:47.327 "data_offset": 2048, 00:11:47.327 "data_size": 63488 00:11:47.327 }, 00:11:47.327 { 00:11:47.327 "name": "pt4", 00:11:47.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.327 "is_configured": true, 00:11:47.327 "data_offset": 2048, 00:11:47.327 "data_size": 63488 00:11:47.327 } 00:11:47.327 ] 00:11:47.327 }' 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.327 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.897 [2024-12-12 09:25:21.691986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.897 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.897 "name": "raid_bdev1", 00:11:47.897 "aliases": [ 00:11:47.897 "1059a42b-1a9f-4709-bf5b-ab9700b53b01" 00:11:47.897 ], 00:11:47.897 "product_name": "Raid Volume", 00:11:47.897 "block_size": 512, 00:11:47.897 "num_blocks": 63488, 00:11:47.897 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:47.897 "assigned_rate_limits": { 00:11:47.897 "rw_ios_per_sec": 0, 00:11:47.897 "rw_mbytes_per_sec": 0, 00:11:47.897 "r_mbytes_per_sec": 0, 00:11:47.897 "w_mbytes_per_sec": 0 00:11:47.897 }, 00:11:47.897 "claimed": false, 00:11:47.897 "zoned": false, 00:11:47.897 "supported_io_types": { 00:11:47.897 "read": true, 00:11:47.897 "write": true, 00:11:47.897 "unmap": false, 00:11:47.897 "flush": false, 00:11:47.897 "reset": true, 00:11:47.897 "nvme_admin": false, 00:11:47.897 "nvme_io": false, 00:11:47.897 "nvme_io_md": false, 00:11:47.897 "write_zeroes": true, 00:11:47.897 "zcopy": false, 00:11:47.897 "get_zone_info": false, 00:11:47.897 "zone_management": false, 00:11:47.897 "zone_append": false, 00:11:47.897 "compare": false, 00:11:47.897 "compare_and_write": false, 00:11:47.897 "abort": false, 00:11:47.897 "seek_hole": false, 00:11:47.897 "seek_data": false, 00:11:47.897 "copy": false, 00:11:47.897 "nvme_iov_md": false 00:11:47.897 }, 00:11:47.897 "memory_domains": [ 00:11:47.897 { 00:11:47.897 "dma_device_id": "system", 00:11:47.897 
"dma_device_type": 1 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.897 "dma_device_type": 2 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "system", 00:11:47.897 "dma_device_type": 1 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.897 "dma_device_type": 2 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "system", 00:11:47.897 "dma_device_type": 1 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.897 "dma_device_type": 2 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "system", 00:11:47.897 "dma_device_type": 1 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.897 "dma_device_type": 2 00:11:47.897 } 00:11:47.897 ], 00:11:47.897 "driver_specific": { 00:11:47.897 "raid": { 00:11:47.897 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:47.897 "strip_size_kb": 0, 00:11:47.897 "state": "online", 00:11:47.897 "raid_level": "raid1", 00:11:47.897 "superblock": true, 00:11:47.897 "num_base_bdevs": 4, 00:11:47.897 "num_base_bdevs_discovered": 4, 00:11:47.897 "num_base_bdevs_operational": 4, 00:11:47.897 "base_bdevs_list": [ 00:11:47.897 { 00:11:47.897 "name": "pt1", 00:11:47.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.897 "is_configured": true, 00:11:47.897 "data_offset": 2048, 00:11:47.897 "data_size": 63488 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "name": "pt2", 00:11:47.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.897 "is_configured": true, 00:11:47.897 "data_offset": 2048, 00:11:47.897 "data_size": 63488 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "name": "pt3", 00:11:47.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.897 "is_configured": true, 00:11:47.897 "data_offset": 2048, 00:11:47.897 "data_size": 63488 00:11:47.897 }, 00:11:47.897 { 00:11:47.897 "name": "pt4", 00:11:47.898 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:47.898 "is_configured": true, 00:11:47.898 "data_offset": 2048, 00:11:47.898 "data_size": 63488 00:11:47.898 } 00:11:47.898 ] 00:11:47.898 } 00:11:47.898 } 00:11:47.898 }' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.898 pt2 00:11:47.898 pt3 00:11:47.898 pt4' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.898 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.159 09:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.159 [2024-12-12 09:25:22.019284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1059a42b-1a9f-4709-bf5b-ab9700b53b01 '!=' 1059a42b-1a9f-4709-bf5b-ab9700b53b01 ']' 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.159 [2024-12-12 09:25:22.066975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:48.159 09:25:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.159 "name": "raid_bdev1", 00:11:48.159 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:48.159 "strip_size_kb": 0, 00:11:48.159 "state": "online", 
00:11:48.159 "raid_level": "raid1", 00:11:48.159 "superblock": true, 00:11:48.159 "num_base_bdevs": 4, 00:11:48.159 "num_base_bdevs_discovered": 3, 00:11:48.159 "num_base_bdevs_operational": 3, 00:11:48.159 "base_bdevs_list": [ 00:11:48.159 { 00:11:48.159 "name": null, 00:11:48.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.159 "is_configured": false, 00:11:48.159 "data_offset": 0, 00:11:48.159 "data_size": 63488 00:11:48.159 }, 00:11:48.159 { 00:11:48.159 "name": "pt2", 00:11:48.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.159 "is_configured": true, 00:11:48.159 "data_offset": 2048, 00:11:48.159 "data_size": 63488 00:11:48.159 }, 00:11:48.159 { 00:11:48.159 "name": "pt3", 00:11:48.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.159 "is_configured": true, 00:11:48.159 "data_offset": 2048, 00:11:48.159 "data_size": 63488 00:11:48.159 }, 00:11:48.159 { 00:11:48.159 "name": "pt4", 00:11:48.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.159 "is_configured": true, 00:11:48.159 "data_offset": 2048, 00:11:48.159 "data_size": 63488 00:11:48.159 } 00:11:48.159 ] 00:11:48.159 }' 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.159 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 [2024-12-12 09:25:22.454262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.729 [2024-12-12 09:25:22.454336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.729 [2024-12-12 09:25:22.454421] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:48.729 [2024-12-12 09:25:22.454554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.729 [2024-12-12 09:25:22.454635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:48.729 
09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 [2024-12-12 09:25:22.550103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.729 [2024-12-12 09:25:22.550152] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.729 [2024-12-12 09:25:22.550171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:48.729 [2024-12-12 09:25:22.550179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.729 [2024-12-12 09:25:22.552734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.729 [2024-12-12 09:25:22.552771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.729 [2024-12-12 09:25:22.552855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.729 [2024-12-12 09:25:22.552907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.729 pt2 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.729 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.729 "name": "raid_bdev1", 00:11:48.729 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:48.729 "strip_size_kb": 0, 00:11:48.729 "state": "configuring", 00:11:48.729 "raid_level": "raid1", 00:11:48.729 "superblock": true, 00:11:48.729 "num_base_bdevs": 4, 00:11:48.729 "num_base_bdevs_discovered": 1, 00:11:48.729 "num_base_bdevs_operational": 3, 00:11:48.729 "base_bdevs_list": [ 00:11:48.729 { 00:11:48.729 "name": null, 00:11:48.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.729 "is_configured": false, 00:11:48.729 "data_offset": 2048, 00:11:48.729 "data_size": 63488 00:11:48.729 }, 00:11:48.729 { 00:11:48.729 "name": "pt2", 00:11:48.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.729 "is_configured": true, 00:11:48.729 "data_offset": 2048, 00:11:48.729 "data_size": 63488 00:11:48.729 }, 00:11:48.729 { 00:11:48.729 "name": null, 00:11:48.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.729 "is_configured": false, 00:11:48.730 "data_offset": 2048, 00:11:48.730 "data_size": 63488 00:11:48.730 }, 00:11:48.730 { 00:11:48.730 "name": null, 00:11:48.730 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.730 "is_configured": false, 00:11:48.730 "data_offset": 2048, 00:11:48.730 "data_size": 63488 00:11:48.730 } 00:11:48.730 ] 00:11:48.730 }' 
00:11:48.730 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.730 09:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.989 09:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:48.989 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:48.989 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.989 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.989 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.989 [2024-12-12 09:25:23.009371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.989 [2024-12-12 09:25:23.009491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.989 [2024-12-12 09:25:23.009534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:48.989 [2024-12-12 09:25:23.009562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.989 [2024-12-12 09:25:23.010115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.989 [2024-12-12 09:25:23.010190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.989 [2024-12-12 09:25:23.010315] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:48.989 [2024-12-12 09:25:23.010369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.250 pt3 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.250 "name": "raid_bdev1", 00:11:49.250 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:49.250 "strip_size_kb": 0, 00:11:49.250 "state": "configuring", 00:11:49.250 "raid_level": "raid1", 00:11:49.250 "superblock": true, 00:11:49.250 "num_base_bdevs": 4, 00:11:49.250 "num_base_bdevs_discovered": 2, 00:11:49.250 "num_base_bdevs_operational": 3, 00:11:49.250 
"base_bdevs_list": [ 00:11:49.250 { 00:11:49.250 "name": null, 00:11:49.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.250 "is_configured": false, 00:11:49.250 "data_offset": 2048, 00:11:49.250 "data_size": 63488 00:11:49.250 }, 00:11:49.250 { 00:11:49.250 "name": "pt2", 00:11:49.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.250 "is_configured": true, 00:11:49.250 "data_offset": 2048, 00:11:49.250 "data_size": 63488 00:11:49.250 }, 00:11:49.250 { 00:11:49.250 "name": "pt3", 00:11:49.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.250 "is_configured": true, 00:11:49.250 "data_offset": 2048, 00:11:49.250 "data_size": 63488 00:11:49.250 }, 00:11:49.250 { 00:11:49.250 "name": null, 00:11:49.250 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.250 "is_configured": false, 00:11:49.250 "data_offset": 2048, 00:11:49.250 "data_size": 63488 00:11:49.250 } 00:11:49.250 ] 00:11:49.250 }' 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.250 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.510 [2024-12-12 09:25:23.464640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.510 [2024-12-12 09:25:23.464747] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.510 [2024-12-12 09:25:23.464778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:49.510 [2024-12-12 09:25:23.464787] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.510 [2024-12-12 09:25:23.465349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.510 [2024-12-12 09:25:23.465374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.510 [2024-12-12 09:25:23.465483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:49.510 [2024-12-12 09:25:23.465509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.510 [2024-12-12 09:25:23.465655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:49.510 [2024-12-12 09:25:23.465664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.510 [2024-12-12 09:25:23.465932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:49.510 [2024-12-12 09:25:23.466125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:49.510 [2024-12-12 09:25:23.466144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:49.510 [2024-12-12 09:25:23.466297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.510 pt4 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.510 "name": "raid_bdev1", 00:11:49.510 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:49.510 "strip_size_kb": 0, 00:11:49.510 "state": "online", 00:11:49.510 "raid_level": "raid1", 00:11:49.510 "superblock": true, 00:11:49.510 "num_base_bdevs": 4, 00:11:49.510 "num_base_bdevs_discovered": 3, 00:11:49.510 "num_base_bdevs_operational": 3, 00:11:49.510 "base_bdevs_list": [ 00:11:49.510 { 00:11:49.510 "name": null, 00:11:49.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.510 "is_configured": false, 00:11:49.510 
"data_offset": 2048, 00:11:49.510 "data_size": 63488 00:11:49.510 }, 00:11:49.510 { 00:11:49.510 "name": "pt2", 00:11:49.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.510 "is_configured": true, 00:11:49.510 "data_offset": 2048, 00:11:49.510 "data_size": 63488 00:11:49.510 }, 00:11:49.510 { 00:11:49.510 "name": "pt3", 00:11:49.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.510 "is_configured": true, 00:11:49.510 "data_offset": 2048, 00:11:49.510 "data_size": 63488 00:11:49.510 }, 00:11:49.510 { 00:11:49.510 "name": "pt4", 00:11:49.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.510 "is_configured": true, 00:11:49.510 "data_offset": 2048, 00:11:49.510 "data_size": 63488 00:11:49.510 } 00:11:49.510 ] 00:11:49.510 }' 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.510 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.080 [2024-12-12 09:25:23.859895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.080 [2024-12-12 09:25:23.859932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.080 [2024-12-12 09:25:23.860053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.080 [2024-12-12 09:25:23.860143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.080 [2024-12-12 09:25:23.860156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:50.080 09:25:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.080 [2024-12-12 09:25:23.935740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.080 [2024-12-12 09:25:23.935825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:50.080 [2024-12-12 09:25:23.935848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:50.080 [2024-12-12 09:25:23.935862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.080 [2024-12-12 09:25:23.938448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.080 [2024-12-12 09:25:23.938489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.080 [2024-12-12 09:25:23.938578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:50.080 [2024-12-12 09:25:23.938627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.080 [2024-12-12 09:25:23.938784] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:50.080 [2024-12-12 09:25:23.938820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.080 [2024-12-12 09:25:23.938836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:50.080 [2024-12-12 09:25:23.938895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.080 [2024-12-12 09:25:23.939004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.080 pt1 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.080 "name": "raid_bdev1", 00:11:50.080 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:50.080 "strip_size_kb": 0, 00:11:50.080 "state": "configuring", 00:11:50.080 "raid_level": "raid1", 00:11:50.080 "superblock": true, 00:11:50.080 "num_base_bdevs": 4, 00:11:50.080 "num_base_bdevs_discovered": 2, 00:11:50.080 "num_base_bdevs_operational": 3, 00:11:50.080 "base_bdevs_list": [ 00:11:50.080 { 00:11:50.080 "name": null, 00:11:50.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.080 "is_configured": false, 00:11:50.080 "data_offset": 2048, 00:11:50.080 
"data_size": 63488 00:11:50.080 }, 00:11:50.080 { 00:11:50.080 "name": "pt2", 00:11:50.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.080 "is_configured": true, 00:11:50.080 "data_offset": 2048, 00:11:50.080 "data_size": 63488 00:11:50.080 }, 00:11:50.080 { 00:11:50.080 "name": "pt3", 00:11:50.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.080 "is_configured": true, 00:11:50.080 "data_offset": 2048, 00:11:50.080 "data_size": 63488 00:11:50.080 }, 00:11:50.080 { 00:11:50.080 "name": null, 00:11:50.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.080 "is_configured": false, 00:11:50.080 "data_offset": 2048, 00:11:50.080 "data_size": 63488 00:11:50.080 } 00:11:50.080 ] 00:11:50.080 }' 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.080 09:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.340 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:50.340 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:50.340 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.340 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.340 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.600 [2024-12-12 
09:25:24.379067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:50.600 [2024-12-12 09:25:24.379194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.600 [2024-12-12 09:25:24.379240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:50.600 [2024-12-12 09:25:24.379271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.600 [2024-12-12 09:25:24.379829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.600 [2024-12-12 09:25:24.379887] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:50.600 [2024-12-12 09:25:24.380021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:50.600 [2024-12-12 09:25:24.380076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:50.600 [2024-12-12 09:25:24.380237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:50.600 [2024-12-12 09:25:24.380277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.600 [2024-12-12 09:25:24.380583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:50.600 [2024-12-12 09:25:24.380772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:50.600 [2024-12-12 09:25:24.380815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:50.600 [2024-12-12 09:25:24.381045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.600 pt4 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.600 09:25:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.600 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.601 "name": "raid_bdev1", 00:11:50.601 "uuid": "1059a42b-1a9f-4709-bf5b-ab9700b53b01", 00:11:50.601 "strip_size_kb": 0, 00:11:50.601 "state": "online", 00:11:50.601 "raid_level": "raid1", 00:11:50.601 "superblock": true, 00:11:50.601 "num_base_bdevs": 4, 00:11:50.601 "num_base_bdevs_discovered": 3, 00:11:50.601 "num_base_bdevs_operational": 3, 00:11:50.601 "base_bdevs_list": [ 00:11:50.601 { 
00:11:50.601 "name": null, 00:11:50.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.601 "is_configured": false, 00:11:50.601 "data_offset": 2048, 00:11:50.601 "data_size": 63488 00:11:50.601 }, 00:11:50.601 { 00:11:50.601 "name": "pt2", 00:11:50.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.601 "is_configured": true, 00:11:50.601 "data_offset": 2048, 00:11:50.601 "data_size": 63488 00:11:50.601 }, 00:11:50.601 { 00:11:50.601 "name": "pt3", 00:11:50.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.601 "is_configured": true, 00:11:50.601 "data_offset": 2048, 00:11:50.601 "data_size": 63488 00:11:50.601 }, 00:11:50.601 { 00:11:50.601 "name": "pt4", 00:11:50.601 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.601 "is_configured": true, 00:11:50.601 "data_offset": 2048, 00:11:50.601 "data_size": 63488 00:11:50.601 } 00:11:50.601 ] 00:11:50.601 }' 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.601 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.861 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:50.861 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.861 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.861 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:50.861 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:51.121 
09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.121 [2024-12-12 09:25:24.906404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1059a42b-1a9f-4709-bf5b-ab9700b53b01 '!=' 1059a42b-1a9f-4709-bf5b-ab9700b53b01 ']' 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75656 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75656 ']' 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75656 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75656 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75656' 00:11:51.121 killing process with pid 75656 00:11:51.121 09:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75656 00:11:51.121 [2024-12-12 09:25:24.985028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.121 [2024-12-12 09:25:24.985177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.121 09:25:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75656 00:11:51.121 [2024-12-12 09:25:24.985291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.121 [2024-12-12 09:25:24.985307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:51.387 [2024-12-12 09:25:25.399933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.771 09:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:52.771 00:11:52.771 real 0m8.534s 00:11:52.771 user 0m13.180s 00:11:52.771 sys 0m1.660s 00:11:52.771 09:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.771 09:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.771 ************************************ 00:11:52.771 END TEST raid_superblock_test 00:11:52.771 ************************************ 00:11:52.771 09:25:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:52.771 09:25:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.771 09:25:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.771 09:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.771 ************************************ 00:11:52.771 START TEST raid_read_error_test 00:11:52.771 ************************************ 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:52.771 
09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:52.771 09:25:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZgQ9aSSncC 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76143 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76143 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76143 ']' 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.771 09:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.771 [2024-12-12 09:25:26.767010] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:52.771 [2024-12-12 09:25:26.767214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76143 ] 00:11:53.038 [2024-12-12 09:25:26.945693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.314 [2024-12-12 09:25:27.079787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.314 [2024-12-12 09:25:27.310178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.314 [2024-12-12 09:25:27.310221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.591 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.591 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.591 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.591 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:53.591 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.591 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 BaseBdev1_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 true 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 [2024-12-12 09:25:27.652813] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:53.859 [2024-12-12 09:25:27.652896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.859 [2024-12-12 09:25:27.652934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:53.859 [2024-12-12 09:25:27.652946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.859 [2024-12-12 09:25:27.655421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.859 [2024-12-12 09:25:27.655463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.859 BaseBdev1 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 BaseBdev2_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 true 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 [2024-12-12 09:25:27.725783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:53.859 [2024-12-12 09:25:27.725846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.859 [2024-12-12 09:25:27.725863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:53.859 [2024-12-12 09:25:27.725875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.859 [2024-12-12 09:25:27.728328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.859 [2024-12-12 09:25:27.728366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.859 BaseBdev2 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 BaseBdev3_malloc 00:11:53.859 09:25:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.859 true 00:11:53.859 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.860 [2024-12-12 09:25:27.811156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.860 [2024-12-12 09:25:27.811213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.860 [2024-12-12 09:25:27.811231] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.860 [2024-12-12 09:25:27.811242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.860 [2024-12-12 09:25:27.813597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.860 [2024-12-12 09:25:27.813636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.860 BaseBdev3 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.860 BaseBdev4_malloc 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.860 true 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.860 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.120 [2024-12-12 09:25:27.884154] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:54.120 [2024-12-12 09:25:27.884309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.120 [2024-12-12 09:25:27.884334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:54.120 [2024-12-12 09:25:27.884347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.120 [2024-12-12 09:25:27.886778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.120 [2024-12-12 09:25:27.886819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:54.120 BaseBdev4 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.120 [2024-12-12 09:25:27.896193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.120 [2024-12-12 09:25:27.898371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.120 [2024-12-12 09:25:27.898491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.120 [2024-12-12 09:25:27.898576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.120 [2024-12-12 09:25:27.898862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:54.120 [2024-12-12 09:25:27.898921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.120 [2024-12-12 09:25:27.899206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:54.120 [2024-12-12 09:25:27.899417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:54.120 [2024-12-12 09:25:27.899459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:54.120 [2024-12-12 09:25:27.899662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:54.120 09:25:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.120 "name": "raid_bdev1", 00:11:54.120 "uuid": "5acb9296-c5ff-4c04-a17c-d4d90ccb018f", 00:11:54.120 "strip_size_kb": 0, 00:11:54.120 "state": "online", 00:11:54.120 "raid_level": "raid1", 00:11:54.120 "superblock": true, 00:11:54.120 "num_base_bdevs": 4, 00:11:54.120 "num_base_bdevs_discovered": 4, 00:11:54.120 "num_base_bdevs_operational": 4, 00:11:54.120 "base_bdevs_list": [ 00:11:54.120 { 
00:11:54.120 "name": "BaseBdev1", 00:11:54.120 "uuid": "763b6cd6-a414-591a-aa8f-cf7e4baaf549", 00:11:54.120 "is_configured": true, 00:11:54.120 "data_offset": 2048, 00:11:54.120 "data_size": 63488 00:11:54.120 }, 00:11:54.120 { 00:11:54.120 "name": "BaseBdev2", 00:11:54.120 "uuid": "f2a56c5d-4807-59ea-b9ea-d7ccfb4aa3e2", 00:11:54.120 "is_configured": true, 00:11:54.120 "data_offset": 2048, 00:11:54.120 "data_size": 63488 00:11:54.120 }, 00:11:54.120 { 00:11:54.120 "name": "BaseBdev3", 00:11:54.120 "uuid": "f2394253-4ecb-5c7d-96de-5fb95aa3bb48", 00:11:54.120 "is_configured": true, 00:11:54.120 "data_offset": 2048, 00:11:54.120 "data_size": 63488 00:11:54.120 }, 00:11:54.120 { 00:11:54.120 "name": "BaseBdev4", 00:11:54.120 "uuid": "0fab0740-e567-5ab3-a385-c2c08e4cd1f2", 00:11:54.120 "is_configured": true, 00:11:54.120 "data_offset": 2048, 00:11:54.120 "data_size": 63488 00:11:54.120 } 00:11:54.120 ] 00:11:54.120 }' 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.120 09:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.380 09:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:54.380 09:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:54.638 [2024-12-12 09:25:28.452799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.576 09:25:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.576 09:25:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.576 "name": "raid_bdev1", 00:11:55.576 "uuid": "5acb9296-c5ff-4c04-a17c-d4d90ccb018f", 00:11:55.576 "strip_size_kb": 0, 00:11:55.576 "state": "online", 00:11:55.576 "raid_level": "raid1", 00:11:55.576 "superblock": true, 00:11:55.576 "num_base_bdevs": 4, 00:11:55.576 "num_base_bdevs_discovered": 4, 00:11:55.576 "num_base_bdevs_operational": 4, 00:11:55.576 "base_bdevs_list": [ 00:11:55.576 { 00:11:55.576 "name": "BaseBdev1", 00:11:55.576 "uuid": "763b6cd6-a414-591a-aa8f-cf7e4baaf549", 00:11:55.576 "is_configured": true, 00:11:55.576 "data_offset": 2048, 00:11:55.576 "data_size": 63488 00:11:55.576 }, 00:11:55.576 { 00:11:55.576 "name": "BaseBdev2", 00:11:55.576 "uuid": "f2a56c5d-4807-59ea-b9ea-d7ccfb4aa3e2", 00:11:55.576 "is_configured": true, 00:11:55.576 "data_offset": 2048, 00:11:55.576 "data_size": 63488 00:11:55.576 }, 00:11:55.576 { 00:11:55.576 "name": "BaseBdev3", 00:11:55.576 "uuid": "f2394253-4ecb-5c7d-96de-5fb95aa3bb48", 00:11:55.576 "is_configured": true, 00:11:55.576 "data_offset": 2048, 00:11:55.576 "data_size": 63488 00:11:55.576 }, 00:11:55.576 { 00:11:55.576 "name": "BaseBdev4", 00:11:55.576 "uuid": "0fab0740-e567-5ab3-a385-c2c08e4cd1f2", 00:11:55.576 "is_configured": true, 00:11:55.576 "data_offset": 2048, 00:11:55.576 "data_size": 63488 00:11:55.576 } 00:11:55.576 ] 00:11:55.576 }' 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.576 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.836 [2024-12-12 09:25:29.834551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.836 [2024-12-12 09:25:29.834682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.836 [2024-12-12 09:25:29.837847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.836 [2024-12-12 09:25:29.837994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.836 [2024-12-12 09:25:29.838148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.836 [2024-12-12 09:25:29.838201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:55.836 { 00:11:55.836 "results": [ 00:11:55.836 { 00:11:55.836 "job": "raid_bdev1", 00:11:55.836 "core_mask": "0x1", 00:11:55.836 "workload": "randrw", 00:11:55.836 "percentage": 50, 00:11:55.836 "status": "finished", 00:11:55.836 "queue_depth": 1, 00:11:55.836 "io_size": 131072, 00:11:55.836 "runtime": 1.382783, 00:11:55.836 "iops": 7943.401097641496, 00:11:55.836 "mibps": 992.925137205187, 00:11:55.836 "io_failed": 0, 00:11:55.836 "io_timeout": 0, 00:11:55.836 "avg_latency_us": 123.30806031480486, 00:11:55.836 "min_latency_us": 23.36419213973799, 00:11:55.836 "max_latency_us": 1509.6174672489083 00:11:55.836 } 00:11:55.836 ], 00:11:55.836 "core_count": 1 00:11:55.836 } 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76143 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76143 ']' 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76143 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.836 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76143 00:11:56.096 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.096 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.096 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76143' 00:11:56.096 killing process with pid 76143 00:11:56.096 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76143 00:11:56.096 [2024-12-12 09:25:29.889300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.096 09:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76143 00:11:56.356 [2024-12-12 09:25:30.233510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZgQ9aSSncC 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:57.737 09:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:57.737 00:11:57.737 real 0m4.831s 00:11:57.737 user 0m5.544s 00:11:57.737 sys 0m0.730s 
00:11:57.737 ************************************ 00:11:57.738 END TEST raid_read_error_test 00:11:57.738 ************************************ 00:11:57.738 09:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.738 09:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.738 09:25:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:57.738 09:25:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:57.738 09:25:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.738 09:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.738 ************************************ 00:11:57.738 START TEST raid_write_error_test 00:11:57.738 ************************************ 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wZUdmZxGbx 00:11:57.738 09:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76289 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76289 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76289 ']' 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.738 09:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.738 [2024-12-12 09:25:31.677334] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:57.738 [2024-12-12 09:25:31.677541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76289 ] 00:11:57.998 [2024-12-12 09:25:31.855930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.998 [2024-12-12 09:25:31.987789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.258 [2024-12-12 09:25:32.219331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.258 [2024-12-12 09:25:32.219390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.518 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.518 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:58.518 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.518 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.518 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.518 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 BaseBdev1_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 true 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 [2024-12-12 09:25:32.576429] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:58.778 [2024-12-12 09:25:32.576500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.778 [2024-12-12 09:25:32.576524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:58.778 [2024-12-12 09:25:32.576536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.778 [2024-12-12 09:25:32.578925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.778 [2024-12-12 09:25:32.579098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:58.778 BaseBdev1 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 BaseBdev2_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:58.778 09:25:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 true 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 [2024-12-12 09:25:32.649369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:58.778 [2024-12-12 09:25:32.649426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.778 [2024-12-12 09:25:32.649461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:58.778 [2024-12-12 09:25:32.649472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.778 [2024-12-12 09:25:32.651817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.778 [2024-12-12 09:25:32.651932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.778 BaseBdev2 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:58.778 BaseBdev3_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 true 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 [2024-12-12 09:25:32.755149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:58.778 [2024-12-12 09:25:32.755201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.778 [2024-12-12 09:25:32.755221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:58.778 [2024-12-12 09:25:32.755232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.778 [2024-12-12 09:25:32.757608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.778 [2024-12-12 09:25:32.757650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:58.778 BaseBdev3 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.778 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.038 BaseBdev4_malloc 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.038 true 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.038 [2024-12-12 09:25:32.826898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:59.038 [2024-12-12 09:25:32.826967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.038 [2024-12-12 09:25:32.826988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:59.038 [2024-12-12 09:25:32.827016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.038 [2024-12-12 09:25:32.829362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.038 [2024-12-12 09:25:32.829404] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:59.038 BaseBdev4 
00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.038 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.038 [2024-12-12 09:25:32.838925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.038 [2024-12-12 09:25:32.841036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.038 [2024-12-12 09:25:32.841109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.038 [2024-12-12 09:25:32.841167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.038 [2024-12-12 09:25:32.841409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:59.038 [2024-12-12 09:25:32.841431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.038 [2024-12-12 09:25:32.841670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:59.038 [2024-12-12 09:25:32.841851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:59.038 [2024-12-12 09:25:32.841860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:59.039 [2024-12-12 09:25:32.842038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.039 "name": "raid_bdev1", 00:11:59.039 "uuid": "ec51d055-555d-4de0-95fb-36f304436ad0", 00:11:59.039 "strip_size_kb": 0, 00:11:59.039 "state": "online", 00:11:59.039 "raid_level": "raid1", 00:11:59.039 "superblock": true, 00:11:59.039 "num_base_bdevs": 4, 00:11:59.039 "num_base_bdevs_discovered": 4, 00:11:59.039 
"num_base_bdevs_operational": 4, 00:11:59.039 "base_bdevs_list": [ 00:11:59.039 { 00:11:59.039 "name": "BaseBdev1", 00:11:59.039 "uuid": "45c7b27f-00ca-58cc-b832-61c4e39cefb9", 00:11:59.039 "is_configured": true, 00:11:59.039 "data_offset": 2048, 00:11:59.039 "data_size": 63488 00:11:59.039 }, 00:11:59.039 { 00:11:59.039 "name": "BaseBdev2", 00:11:59.039 "uuid": "77b850ce-901a-5fd9-b472-9c62c787a19a", 00:11:59.039 "is_configured": true, 00:11:59.039 "data_offset": 2048, 00:11:59.039 "data_size": 63488 00:11:59.039 }, 00:11:59.039 { 00:11:59.039 "name": "BaseBdev3", 00:11:59.039 "uuid": "26d0a0ab-1827-506d-85fd-669010f3a3da", 00:11:59.039 "is_configured": true, 00:11:59.039 "data_offset": 2048, 00:11:59.039 "data_size": 63488 00:11:59.039 }, 00:11:59.039 { 00:11:59.039 "name": "BaseBdev4", 00:11:59.039 "uuid": "e8a05534-f33d-5322-86f2-b41b9088da03", 00:11:59.039 "is_configured": true, 00:11:59.039 "data_offset": 2048, 00:11:59.039 "data_size": 63488 00:11:59.039 } 00:11:59.039 ] 00:11:59.039 }' 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.039 09:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.298 09:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:59.298 09:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:59.558 [2024-12-12 09:25:33.331481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.498 [2024-12-12 09:25:34.271774] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:00.498 [2024-12-12 09:25:34.271966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.498 [2024-12-12 09:25:34.272255] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.498 "name": "raid_bdev1", 00:12:00.498 "uuid": "ec51d055-555d-4de0-95fb-36f304436ad0", 00:12:00.498 "strip_size_kb": 0, 00:12:00.498 "state": "online", 00:12:00.498 "raid_level": "raid1", 00:12:00.498 "superblock": true, 00:12:00.498 "num_base_bdevs": 4, 00:12:00.498 "num_base_bdevs_discovered": 3, 00:12:00.498 "num_base_bdevs_operational": 3, 00:12:00.498 "base_bdevs_list": [ 00:12:00.498 { 00:12:00.498 "name": null, 00:12:00.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.498 "is_configured": false, 00:12:00.498 "data_offset": 0, 00:12:00.498 "data_size": 63488 00:12:00.498 }, 00:12:00.498 { 00:12:00.498 "name": "BaseBdev2", 00:12:00.498 "uuid": "77b850ce-901a-5fd9-b472-9c62c787a19a", 00:12:00.498 "is_configured": true, 00:12:00.498 "data_offset": 2048, 00:12:00.498 "data_size": 63488 00:12:00.498 }, 00:12:00.498 { 00:12:00.498 "name": "BaseBdev3", 00:12:00.498 "uuid": "26d0a0ab-1827-506d-85fd-669010f3a3da", 00:12:00.498 "is_configured": true, 00:12:00.498 "data_offset": 2048, 00:12:00.498 "data_size": 63488 00:12:00.498 }, 00:12:00.498 { 00:12:00.498 "name": "BaseBdev4", 00:12:00.498 "uuid": "e8a05534-f33d-5322-86f2-b41b9088da03", 00:12:00.498 "is_configured": true, 00:12:00.498 "data_offset": 2048, 00:12:00.498 "data_size": 63488 00:12:00.498 } 00:12:00.498 ] 
00:12:00.498 }' 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.498 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.758 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.758 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.759 [2024-12-12 09:25:34.730704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.759 [2024-12-12 09:25:34.730754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.759 [2024-12-12 09:25:34.733686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.759 [2024-12-12 09:25:34.733791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.759 [2024-12-12 09:25:34.733927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.759 [2024-12-12 09:25:34.733986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:00.759 { 00:12:00.759 "results": [ 00:12:00.759 { 00:12:00.759 "job": "raid_bdev1", 00:12:00.759 "core_mask": "0x1", 00:12:00.759 "workload": "randrw", 00:12:00.759 "percentage": 50, 00:12:00.759 "status": "finished", 00:12:00.759 "queue_depth": 1, 00:12:00.759 "io_size": 131072, 00:12:00.759 "runtime": 1.399993, 00:12:00.759 "iops": 8843.615646649661, 00:12:00.759 "mibps": 1105.4519558312077, 00:12:00.759 "io_failed": 0, 00:12:00.759 "io_timeout": 0, 00:12:00.759 "avg_latency_us": 110.45758534788303, 00:12:00.759 "min_latency_us": 23.475982532751093, 00:12:00.759 "max_latency_us": 1509.6174672489083 00:12:00.759 } 00:12:00.759 ], 00:12:00.759 "core_count": 1 
00:12:00.759 } 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76289 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76289 ']' 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76289 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76289 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76289' 00:12:00.759 killing process with pid 76289 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76289 00:12:00.759 [2024-12-12 09:25:34.772274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.759 09:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76289 00:12:01.327 [2024-12-12 09:25:35.112005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wZUdmZxGbx 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.708 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:02.709 09:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:02.709 ************************************ 00:12:02.709 END TEST raid_write_error_test 00:12:02.709 ************************************ 00:12:02.709 00:12:02.709 real 0m4.830s 00:12:02.709 user 0m5.485s 00:12:02.709 sys 0m0.733s 00:12:02.709 09:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.709 09:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.709 09:25:36 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:02.709 09:25:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:02.709 09:25:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:02.709 09:25:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:02.709 09:25:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.709 09:25:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.709 ************************************ 00:12:02.709 START TEST raid_rebuild_test 00:12:02.709 ************************************ 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:02.709 
09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=76441 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 76441 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 76441 ']' 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.709 09:25:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.709 [2024-12-12 09:25:36.570487] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:02.709 [2024-12-12 09:25:36.570708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76441 ] 00:12:02.709 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:02.709 Zero copy mechanism will not be used. 
00:12:02.968 [2024-12-12 09:25:36.749370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.968 [2024-12-12 09:25:36.878413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.228 [2024-12-12 09:25:37.107765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.228 [2024-12-12 09:25:37.107936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.488 BaseBdev1_malloc 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.488 [2024-12-12 09:25:37.438735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:03.488 [2024-12-12 09:25:37.438827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.488 [2024-12-12 09:25:37.438855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.488 [2024-12-12 09:25:37.438867] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.488 [2024-12-12 09:25:37.441500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.488 [2024-12-12 09:25:37.441540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.488 BaseBdev1 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.488 BaseBdev2_malloc 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.488 [2024-12-12 09:25:37.499639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:03.488 [2024-12-12 09:25:37.499727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.488 [2024-12-12 09:25:37.499749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.488 [2024-12-12 09:25:37.499762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.488 [2024-12-12 09:25:37.502154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.488 [2024-12-12 09:25:37.502286] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.488 BaseBdev2 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.488 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.749 spare_malloc 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.749 spare_delay 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.749 [2024-12-12 09:25:37.585636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.749 [2024-12-12 09:25:37.585701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.749 [2024-12-12 09:25:37.585737] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:03.749 [2024-12-12 09:25:37.585748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.749 [2024-12-12 
09:25:37.588111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.749 [2024-12-12 09:25:37.588150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.749 spare 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.749 [2024-12-12 09:25:37.597678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.749 [2024-12-12 09:25:37.599790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.749 [2024-12-12 09:25:37.599875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:03.749 [2024-12-12 09:25:37.599888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.749 [2024-12-12 09:25:37.600157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:03.749 [2024-12-12 09:25:37.600315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:03.749 [2024-12-12 09:25:37.600334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:03.749 [2024-12-12 09:25:37.600503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.749 09:25:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.749 "name": "raid_bdev1", 00:12:03.749 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:03.749 "strip_size_kb": 0, 00:12:03.749 "state": "online", 00:12:03.749 "raid_level": "raid1", 00:12:03.749 "superblock": false, 00:12:03.749 "num_base_bdevs": 2, 00:12:03.749 "num_base_bdevs_discovered": 2, 00:12:03.749 "num_base_bdevs_operational": 2, 00:12:03.749 "base_bdevs_list": [ 00:12:03.749 { 00:12:03.749 "name": "BaseBdev1", 
00:12:03.749 "uuid": "48f86c9d-fadf-56d7-8011-7c3c00d1f4dd", 00:12:03.749 "is_configured": true, 00:12:03.749 "data_offset": 0, 00:12:03.749 "data_size": 65536 00:12:03.749 }, 00:12:03.749 { 00:12:03.749 "name": "BaseBdev2", 00:12:03.749 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:03.749 "is_configured": true, 00:12:03.749 "data_offset": 0, 00:12:03.749 "data_size": 65536 00:12:03.749 } 00:12:03.749 ] 00:12:03.749 }' 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.749 09:25:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.321 [2024-12-12 09:25:38.073130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:04.321 
09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.321 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:04.322 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:04.322 [2024-12-12 09:25:38.324507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:04.322 /dev/nbd0 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.582 1+0 records in 00:12:04.582 1+0 records out 00:12:04.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596597 s, 6.9 MB/s 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:04.582 09:25:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:08.781 65536+0 records in 00:12:08.781 65536+0 records out 00:12:08.781 33554432 bytes (34 MB, 32 MiB) copied, 4.18311 s, 8.0 MB/s 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:08.781 [2024-12-12 09:25:42.796095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.781 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.042 [2024-12-12 09:25:42.816170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.042 09:25:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.042 "name": "raid_bdev1", 00:12:09.042 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:09.042 "strip_size_kb": 0, 00:12:09.042 "state": "online", 00:12:09.042 "raid_level": "raid1", 00:12:09.042 "superblock": false, 00:12:09.042 "num_base_bdevs": 2, 00:12:09.042 "num_base_bdevs_discovered": 1, 00:12:09.042 "num_base_bdevs_operational": 1, 00:12:09.042 "base_bdevs_list": [ 00:12:09.042 { 00:12:09.042 "name": null, 00:12:09.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.042 "is_configured": false, 00:12:09.042 "data_offset": 0, 00:12:09.042 "data_size": 65536 00:12:09.042 }, 00:12:09.042 { 00:12:09.042 "name": "BaseBdev2", 00:12:09.042 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:09.042 "is_configured": true, 00:12:09.042 "data_offset": 0, 00:12:09.042 "data_size": 65536 00:12:09.042 } 00:12:09.042 ] 00:12:09.042 }' 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.042 09:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.303 09:25:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.303 09:25:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.303 09:25:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.303 [2024-12-12 09:25:43.291762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.303 [2024-12-12 09:25:43.310193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:09.303 09:25:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.303 09:25:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:09.303 [2024-12-12 09:25:43.312282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.683 "name": "raid_bdev1", 00:12:10.683 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:10.683 "strip_size_kb": 0, 00:12:10.683 "state": "online", 00:12:10.683 "raid_level": "raid1", 00:12:10.683 "superblock": false, 00:12:10.683 "num_base_bdevs": 2, 00:12:10.683 "num_base_bdevs_discovered": 2, 00:12:10.683 "num_base_bdevs_operational": 2, 00:12:10.683 "process": { 00:12:10.683 "type": "rebuild", 00:12:10.683 "target": "spare", 00:12:10.683 "progress": { 00:12:10.683 "blocks": 20480, 00:12:10.683 "percent": 31 00:12:10.683 } 00:12:10.683 }, 00:12:10.683 "base_bdevs_list": [ 00:12:10.683 { 00:12:10.683 "name": "spare", 00:12:10.683 "uuid": "e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:10.683 "is_configured": true, 00:12:10.683 "data_offset": 0, 00:12:10.683 
"data_size": 65536 00:12:10.683 }, 00:12:10.683 { 00:12:10.683 "name": "BaseBdev2", 00:12:10.683 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:10.683 "is_configured": true, 00:12:10.683 "data_offset": 0, 00:12:10.683 "data_size": 65536 00:12:10.683 } 00:12:10.683 ] 00:12:10.683 }' 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.683 [2024-12-12 09:25:44.475787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.683 [2024-12-12 09:25:44.521275] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:10.683 [2024-12-12 09:25:44.521391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.683 [2024-12-12 09:25:44.521427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.683 [2024-12-12 09:25:44.521453] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.683 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.684 "name": "raid_bdev1", 00:12:10.684 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:10.684 "strip_size_kb": 0, 00:12:10.684 "state": "online", 00:12:10.684 "raid_level": "raid1", 00:12:10.684 "superblock": false, 00:12:10.684 "num_base_bdevs": 2, 00:12:10.684 "num_base_bdevs_discovered": 1, 00:12:10.684 "num_base_bdevs_operational": 1, 00:12:10.684 "base_bdevs_list": [ 00:12:10.684 { 00:12:10.684 "name": null, 00:12:10.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.684 
"is_configured": false, 00:12:10.684 "data_offset": 0, 00:12:10.684 "data_size": 65536 00:12:10.684 }, 00:12:10.684 { 00:12:10.684 "name": "BaseBdev2", 00:12:10.684 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:10.684 "is_configured": true, 00:12:10.684 "data_offset": 0, 00:12:10.684 "data_size": 65536 00:12:10.684 } 00:12:10.684 ] 00:12:10.684 }' 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.684 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.253 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.254 09:25:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.254 "name": "raid_bdev1", 00:12:11.254 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:11.254 "strip_size_kb": 0, 00:12:11.254 "state": "online", 00:12:11.254 "raid_level": "raid1", 00:12:11.254 "superblock": false, 00:12:11.254 "num_base_bdevs": 2, 00:12:11.254 
"num_base_bdevs_discovered": 1, 00:12:11.254 "num_base_bdevs_operational": 1, 00:12:11.254 "base_bdevs_list": [ 00:12:11.254 { 00:12:11.254 "name": null, 00:12:11.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.254 "is_configured": false, 00:12:11.254 "data_offset": 0, 00:12:11.254 "data_size": 65536 00:12:11.254 }, 00:12:11.254 { 00:12:11.254 "name": "BaseBdev2", 00:12:11.254 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:11.254 "is_configured": true, 00:12:11.254 "data_offset": 0, 00:12:11.254 "data_size": 65536 00:12:11.254 } 00:12:11.254 ] 00:12:11.254 }' 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.254 [2024-12-12 09:25:45.132913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.254 [2024-12-12 09:25:45.150674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.254 09:25:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:11.254 [2024-12-12 09:25:45.152808] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.192 "name": "raid_bdev1", 00:12:12.192 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:12.192 "strip_size_kb": 0, 00:12:12.192 "state": "online", 00:12:12.192 "raid_level": "raid1", 00:12:12.192 "superblock": false, 00:12:12.192 "num_base_bdevs": 2, 00:12:12.192 "num_base_bdevs_discovered": 2, 00:12:12.192 "num_base_bdevs_operational": 2, 00:12:12.192 "process": { 00:12:12.192 "type": "rebuild", 00:12:12.192 "target": "spare", 00:12:12.192 "progress": { 00:12:12.192 "blocks": 20480, 00:12:12.192 "percent": 31 00:12:12.192 } 00:12:12.192 }, 00:12:12.192 "base_bdevs_list": [ 00:12:12.192 { 00:12:12.192 "name": "spare", 00:12:12.192 "uuid": "e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:12.192 "is_configured": true, 00:12:12.192 "data_offset": 0, 00:12:12.192 "data_size": 65536 00:12:12.192 }, 00:12:12.192 { 00:12:12.192 "name": "BaseBdev2", 00:12:12.192 "uuid": 
"0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:12.192 "is_configured": true, 00:12:12.192 "data_offset": 0, 00:12:12.192 "data_size": 65536 00:12:12.192 } 00:12:12.192 ] 00:12:12.192 }' 00:12:12.192 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=372 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.452 "name": "raid_bdev1", 00:12:12.452 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:12.452 "strip_size_kb": 0, 00:12:12.452 "state": "online", 00:12:12.452 "raid_level": "raid1", 00:12:12.452 "superblock": false, 00:12:12.452 "num_base_bdevs": 2, 00:12:12.452 "num_base_bdevs_discovered": 2, 00:12:12.452 "num_base_bdevs_operational": 2, 00:12:12.452 "process": { 00:12:12.452 "type": "rebuild", 00:12:12.452 "target": "spare", 00:12:12.452 "progress": { 00:12:12.452 "blocks": 22528, 00:12:12.452 "percent": 34 00:12:12.452 } 00:12:12.452 }, 00:12:12.452 "base_bdevs_list": [ 00:12:12.452 { 00:12:12.452 "name": "spare", 00:12:12.452 "uuid": "e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:12.452 "is_configured": true, 00:12:12.452 "data_offset": 0, 00:12:12.452 "data_size": 65536 00:12:12.452 }, 00:12:12.452 { 00:12:12.452 "name": "BaseBdev2", 00:12:12.452 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:12.452 "is_configured": true, 00:12:12.452 "data_offset": 0, 00:12:12.452 "data_size": 65536 00:12:12.452 } 00:12:12.452 ] 00:12:12.452 }' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.452 09:25:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.834 "name": "raid_bdev1", 00:12:13.834 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:13.834 "strip_size_kb": 0, 00:12:13.834 "state": "online", 00:12:13.834 "raid_level": "raid1", 00:12:13.834 "superblock": false, 00:12:13.834 "num_base_bdevs": 2, 00:12:13.834 "num_base_bdevs_discovered": 2, 00:12:13.834 "num_base_bdevs_operational": 2, 00:12:13.834 "process": { 00:12:13.834 "type": "rebuild", 00:12:13.834 "target": "spare", 00:12:13.834 "progress": { 00:12:13.834 "blocks": 45056, 00:12:13.834 "percent": 68 00:12:13.834 } 00:12:13.834 }, 00:12:13.834 "base_bdevs_list": [ 00:12:13.834 { 00:12:13.834 "name": "spare", 00:12:13.834 "uuid": 
"e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:13.834 "is_configured": true, 00:12:13.834 "data_offset": 0, 00:12:13.834 "data_size": 65536 00:12:13.834 }, 00:12:13.834 { 00:12:13.834 "name": "BaseBdev2", 00:12:13.834 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:13.834 "is_configured": true, 00:12:13.834 "data_offset": 0, 00:12:13.834 "data_size": 65536 00:12:13.834 } 00:12:13.834 ] 00:12:13.834 }' 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.834 09:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:14.403 [2024-12-12 09:25:48.376046] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:14.403 [2024-12-12 09:25:48.376136] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:14.403 [2024-12-12 09:25:48.376192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.664 09:25:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.664 "name": "raid_bdev1", 00:12:14.664 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:14.664 "strip_size_kb": 0, 00:12:14.664 "state": "online", 00:12:14.664 "raid_level": "raid1", 00:12:14.664 "superblock": false, 00:12:14.664 "num_base_bdevs": 2, 00:12:14.664 "num_base_bdevs_discovered": 2, 00:12:14.664 "num_base_bdevs_operational": 2, 00:12:14.664 "base_bdevs_list": [ 00:12:14.664 { 00:12:14.664 "name": "spare", 00:12:14.664 "uuid": "e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:14.664 "is_configured": true, 00:12:14.664 "data_offset": 0, 00:12:14.664 "data_size": 65536 00:12:14.664 }, 00:12:14.664 { 00:12:14.664 "name": "BaseBdev2", 00:12:14.664 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:14.664 "is_configured": true, 00:12:14.664 "data_offset": 0, 00:12:14.664 "data_size": 65536 00:12:14.664 } 00:12:14.664 ] 00:12:14.664 }' 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:14.664 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.924 "name": "raid_bdev1", 00:12:14.924 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:14.924 "strip_size_kb": 0, 00:12:14.924 "state": "online", 00:12:14.924 "raid_level": "raid1", 00:12:14.924 "superblock": false, 00:12:14.924 "num_base_bdevs": 2, 00:12:14.924 "num_base_bdevs_discovered": 2, 00:12:14.924 "num_base_bdevs_operational": 2, 00:12:14.924 "base_bdevs_list": [ 00:12:14.924 { 00:12:14.924 "name": "spare", 00:12:14.924 "uuid": "e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:14.924 "is_configured": true, 00:12:14.924 "data_offset": 0, 00:12:14.924 "data_size": 65536 00:12:14.924 }, 00:12:14.924 { 00:12:14.924 "name": "BaseBdev2", 00:12:14.924 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:14.924 "is_configured": true, 00:12:14.924 "data_offset": 0, 00:12:14.924 "data_size": 65536 
00:12:14.924 } 00:12:14.924 ] 00:12:14.924 }' 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.924 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.925 
09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.925 "name": "raid_bdev1", 00:12:14.925 "uuid": "83a01f32-1a84-4ace-98ee-dc9166996271", 00:12:14.925 "strip_size_kb": 0, 00:12:14.925 "state": "online", 00:12:14.925 "raid_level": "raid1", 00:12:14.925 "superblock": false, 00:12:14.925 "num_base_bdevs": 2, 00:12:14.925 "num_base_bdevs_discovered": 2, 00:12:14.925 "num_base_bdevs_operational": 2, 00:12:14.925 "base_bdevs_list": [ 00:12:14.925 { 00:12:14.925 "name": "spare", 00:12:14.925 "uuid": "e08824fb-a139-5fe1-99fb-eebac5d9f0eb", 00:12:14.925 "is_configured": true, 00:12:14.925 "data_offset": 0, 00:12:14.925 "data_size": 65536 00:12:14.925 }, 00:12:14.925 { 00:12:14.925 "name": "BaseBdev2", 00:12:14.925 "uuid": "0b3c670b-1981-51d4-9f7c-ddd605732521", 00:12:14.925 "is_configured": true, 00:12:14.925 "data_offset": 0, 00:12:14.925 "data_size": 65536 00:12:14.925 } 00:12:14.925 ] 00:12:14.925 }' 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.925 09:25:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.495 [2024-12-12 09:25:49.262277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.495 [2024-12-12 09:25:49.262314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.495 [2024-12-12 09:25:49.262413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.495 [2024-12-12 09:25:49.262489] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.495 [2024-12-12 09:25:49.262500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:15.495 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:15.495 /dev/nbd0 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.755 1+0 records in 00:12:15.755 1+0 records out 00:12:15.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00281949 s, 1.5 MB/s 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:15.755 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:15.755 /dev/nbd1 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:16.015 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.016 1+0 records in 00:12:16.016 1+0 records out 00:12:16.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450559 s, 9.1 MB/s 00:12:16.016 09:25:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:16.016 09:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.016 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:16.281 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:16.281 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:16.281 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:16.281 
09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.282 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.282 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:16.282 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:16.282 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.282 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.282 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 76441 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 76441 ']' 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 76441 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76441 00:12:16.554 killing process with pid 76441 00:12:16.554 Received shutdown signal, test time was about 60.000000 seconds 00:12:16.554 00:12:16.554 Latency(us) 00:12:16.554 [2024-12-12T09:25:50.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.554 [2024-12-12T09:25:50.577Z] =================================================================================================================== 00:12:16.554 [2024-12-12T09:25:50.577Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76441' 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 76441 00:12:16.554 [2024-12-12 09:25:50.487867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.554 09:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 76441 00:12:16.829 [2024-12-12 09:25:50.810695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.210 09:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:18.210 00:12:18.210 real 0m15.526s 00:12:18.210 user 0m17.338s 00:12:18.210 sys 0m3.110s 00:12:18.210 09:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.210 09:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.210 ************************************ 00:12:18.210 END TEST raid_rebuild_test 
00:12:18.210 ************************************ 00:12:18.211 09:25:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:18.211 09:25:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:18.211 09:25:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.211 09:25:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.211 ************************************ 00:12:18.211 START TEST raid_rebuild_test_sb 00:12:18.211 ************************************ 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76863 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76863 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76863 ']' 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.211 09:25:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.211 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:18.211 Zero copy mechanism will not be used. 00:12:18.211 [2024-12-12 09:25:52.175065] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:18.211 [2024-12-12 09:25:52.175185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76863 ] 00:12:18.470 [2024-12-12 09:25:52.351440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.470 [2024-12-12 09:25:52.488819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.729 [2024-12-12 09:25:52.716424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.729 [2024-12-12 09:25:52.716527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.989 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.989 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:18.989 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.989 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.989 09:25:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.989 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.248 BaseBdev1_malloc 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 [2024-12-12 09:25:53.050568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.249 [2024-12-12 09:25:53.050646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.249 [2024-12-12 09:25:53.050671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.249 [2024-12-12 09:25:53.050683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.249 [2024-12-12 09:25:53.053088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.249 [2024-12-12 09:25:53.053129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.249 BaseBdev1 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 BaseBdev2_malloc 00:12:19.249 
09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 [2024-12-12 09:25:53.110802] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:19.249 [2024-12-12 09:25:53.110880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.249 [2024-12-12 09:25:53.110902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.249 [2024-12-12 09:25:53.110912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.249 [2024-12-12 09:25:53.113340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.249 [2024-12-12 09:25:53.113378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.249 BaseBdev2 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 spare_malloc 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 spare_delay 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 [2024-12-12 09:25:53.194126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.249 [2024-12-12 09:25:53.194202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.249 [2024-12-12 09:25:53.194221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:19.249 [2024-12-12 09:25:53.194233] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.249 [2024-12-12 09:25:53.196584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.249 [2024-12-12 09:25:53.196705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.249 spare 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 [2024-12-12 09:25:53.206167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.249 [2024-12-12 
09:25:53.208256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.249 [2024-12-12 09:25:53.208425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:19.249 [2024-12-12 09:25:53.208440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.249 [2024-12-12 09:25:53.208676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:19.249 [2024-12-12 09:25:53.208834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:19.249 [2024-12-12 09:25:53.208843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:19.249 [2024-12-12 09:25:53.209003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.249 "name": "raid_bdev1", 00:12:19.249 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:19.249 "strip_size_kb": 0, 00:12:19.249 "state": "online", 00:12:19.249 "raid_level": "raid1", 00:12:19.249 "superblock": true, 00:12:19.249 "num_base_bdevs": 2, 00:12:19.249 "num_base_bdevs_discovered": 2, 00:12:19.249 "num_base_bdevs_operational": 2, 00:12:19.249 "base_bdevs_list": [ 00:12:19.249 { 00:12:19.249 "name": "BaseBdev1", 00:12:19.249 "uuid": "16349a5e-054f-500a-8087-b2b2fe117c89", 00:12:19.249 "is_configured": true, 00:12:19.249 "data_offset": 2048, 00:12:19.249 "data_size": 63488 00:12:19.249 }, 00:12:19.249 { 00:12:19.249 "name": "BaseBdev2", 00:12:19.249 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:19.249 "is_configured": true, 00:12:19.249 "data_offset": 2048, 00:12:19.249 "data_size": 63488 00:12:19.249 } 00:12:19.249 ] 00:12:19.249 }' 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.249 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.818 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.818 09:25:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.819 [2024-12-12 09:25:53.669682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.819 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:20.079 [2024-12-12 09:25:53.957013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:20.079 /dev/nbd0 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:20.079 09:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.079 1+0 records in 00:12:20.079 1+0 records out 00:12:20.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483114 s, 8.5 MB/s 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:20.079 09:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:24.276 63488+0 records in 00:12:24.276 63488+0 records out 00:12:24.276 32505856 bytes (33 MB, 31 MiB) copied, 4.08049 s, 8.0 MB/s 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.276 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.536 [2024-12-12 09:25:58.302201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.536 [2024-12-12 09:25:58.341453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.536 "name": "raid_bdev1", 00:12:24.536 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:24.536 "strip_size_kb": 0, 00:12:24.536 "state": "online", 00:12:24.536 "raid_level": "raid1", 00:12:24.536 "superblock": true, 00:12:24.536 "num_base_bdevs": 2, 00:12:24.536 "num_base_bdevs_discovered": 1, 00:12:24.536 "num_base_bdevs_operational": 1, 00:12:24.536 "base_bdevs_list": [ 00:12:24.536 { 00:12:24.536 "name": null, 00:12:24.536 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:24.536 "is_configured": false, 00:12:24.536 "data_offset": 0, 00:12:24.536 "data_size": 63488 00:12:24.536 }, 00:12:24.536 { 00:12:24.536 "name": "BaseBdev2", 00:12:24.536 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:24.536 "is_configured": true, 00:12:24.536 "data_offset": 2048, 00:12:24.536 "data_size": 63488 00:12:24.536 } 00:12:24.536 ] 00:12:24.536 }' 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.536 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.796 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.796 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.796 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.796 [2024-12-12 09:25:58.792737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.796 [2024-12-12 09:25:58.811797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:24.796 09:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.796 09:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:24.796 [2024-12-12 09:25:58.813932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.177 
09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.177 09:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.178 "name": "raid_bdev1", 00:12:26.178 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:26.178 "strip_size_kb": 0, 00:12:26.178 "state": "online", 00:12:26.178 "raid_level": "raid1", 00:12:26.178 "superblock": true, 00:12:26.178 "num_base_bdevs": 2, 00:12:26.178 "num_base_bdevs_discovered": 2, 00:12:26.178 "num_base_bdevs_operational": 2, 00:12:26.178 "process": { 00:12:26.178 "type": "rebuild", 00:12:26.178 "target": "spare", 00:12:26.178 "progress": { 00:12:26.178 "blocks": 20480, 00:12:26.178 "percent": 32 00:12:26.178 } 00:12:26.178 }, 00:12:26.178 "base_bdevs_list": [ 00:12:26.178 { 00:12:26.178 "name": "spare", 00:12:26.178 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:26.178 "is_configured": true, 00:12:26.178 "data_offset": 2048, 00:12:26.178 "data_size": 63488 00:12:26.178 }, 00:12:26.178 { 00:12:26.178 "name": "BaseBdev2", 00:12:26.178 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:26.178 "is_configured": true, 00:12:26.178 "data_offset": 2048, 00:12:26.178 "data_size": 63488 00:12:26.178 } 00:12:26.178 ] 00:12:26.178 }' 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.178 09:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.178 [2024-12-12 09:25:59.977088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.178 [2024-12-12 09:26:00.022644] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:26.178 [2024-12-12 09:26:00.022723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.178 [2024-12-12 09:26:00.022739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.178 [2024-12-12 09:26:00.022753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.178 "name": "raid_bdev1", 00:12:26.178 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:26.178 "strip_size_kb": 0, 00:12:26.178 "state": "online", 00:12:26.178 "raid_level": "raid1", 00:12:26.178 "superblock": true, 00:12:26.178 "num_base_bdevs": 2, 00:12:26.178 "num_base_bdevs_discovered": 1, 00:12:26.178 "num_base_bdevs_operational": 1, 00:12:26.178 "base_bdevs_list": [ 00:12:26.178 { 00:12:26.178 "name": null, 00:12:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.178 "is_configured": false, 00:12:26.178 "data_offset": 0, 00:12:26.178 "data_size": 63488 00:12:26.178 }, 00:12:26.178 { 00:12:26.178 "name": "BaseBdev2", 00:12:26.178 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:26.178 "is_configured": true, 00:12:26.178 "data_offset": 2048, 00:12:26.178 "data_size": 63488 00:12:26.178 } 00:12:26.178 ] 00:12:26.178 }' 00:12:26.178 09:26:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.178 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.746 "name": "raid_bdev1", 00:12:26.746 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:26.746 "strip_size_kb": 0, 00:12:26.746 "state": "online", 00:12:26.746 "raid_level": "raid1", 00:12:26.746 "superblock": true, 00:12:26.746 "num_base_bdevs": 2, 00:12:26.746 "num_base_bdevs_discovered": 1, 00:12:26.746 "num_base_bdevs_operational": 1, 00:12:26.746 "base_bdevs_list": [ 00:12:26.746 { 00:12:26.746 "name": null, 00:12:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.746 "is_configured": false, 00:12:26.746 "data_offset": 0, 00:12:26.746 "data_size": 63488 00:12:26.746 }, 00:12:26.746 
{ 00:12:26.746 "name": "BaseBdev2", 00:12:26.746 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:26.746 "is_configured": true, 00:12:26.746 "data_offset": 2048, 00:12:26.746 "data_size": 63488 00:12:26.746 } 00:12:26.746 ] 00:12:26.746 }' 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.746 [2024-12-12 09:26:00.634718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.746 [2024-12-12 09:26:00.652399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:26.746 [2024-12-12 09:26:00.654486] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.746 09:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:27.686 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.686 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.686 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.686 09:26:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.687 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.687 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.687 09:26:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.687 09:26:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.687 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.687 09:26:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.946 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.947 "name": "raid_bdev1", 00:12:27.947 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:27.947 "strip_size_kb": 0, 00:12:27.947 "state": "online", 00:12:27.947 "raid_level": "raid1", 00:12:27.947 "superblock": true, 00:12:27.947 "num_base_bdevs": 2, 00:12:27.947 "num_base_bdevs_discovered": 2, 00:12:27.947 "num_base_bdevs_operational": 2, 00:12:27.947 "process": { 00:12:27.947 "type": "rebuild", 00:12:27.947 "target": "spare", 00:12:27.947 "progress": { 00:12:27.947 "blocks": 20480, 00:12:27.947 "percent": 32 00:12:27.947 } 00:12:27.947 }, 00:12:27.947 "base_bdevs_list": [ 00:12:27.947 { 00:12:27.947 "name": "spare", 00:12:27.947 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:27.947 "is_configured": true, 00:12:27.947 "data_offset": 2048, 00:12:27.947 "data_size": 63488 00:12:27.947 }, 00:12:27.947 { 00:12:27.947 "name": "BaseBdev2", 00:12:27.947 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:27.947 "is_configured": true, 00:12:27.947 "data_offset": 2048, 00:12:27.947 "data_size": 63488 00:12:27.947 } 00:12:27.947 ] 00:12:27.947 }' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:27.947 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=387 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.947 "name": "raid_bdev1", 00:12:27.947 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:27.947 "strip_size_kb": 0, 00:12:27.947 "state": "online", 00:12:27.947 "raid_level": "raid1", 00:12:27.947 "superblock": true, 00:12:27.947 "num_base_bdevs": 2, 00:12:27.947 "num_base_bdevs_discovered": 2, 00:12:27.947 "num_base_bdevs_operational": 2, 00:12:27.947 "process": { 00:12:27.947 "type": "rebuild", 00:12:27.947 "target": "spare", 00:12:27.947 "progress": { 00:12:27.947 "blocks": 22528, 00:12:27.947 "percent": 35 00:12:27.947 } 00:12:27.947 }, 00:12:27.947 "base_bdevs_list": [ 00:12:27.947 { 00:12:27.947 "name": "spare", 00:12:27.947 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:27.947 "is_configured": true, 00:12:27.947 "data_offset": 2048, 00:12:27.947 "data_size": 63488 00:12:27.947 }, 00:12:27.947 { 00:12:27.947 "name": "BaseBdev2", 00:12:27.947 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:27.947 "is_configured": true, 00:12:27.947 "data_offset": 2048, 00:12:27.947 "data_size": 63488 00:12:27.947 } 00:12:27.947 ] 00:12:27.947 }' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.947 09:26:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.947 09:26:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.331 "name": "raid_bdev1", 00:12:29.331 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:29.331 "strip_size_kb": 0, 00:12:29.331 "state": "online", 00:12:29.331 "raid_level": "raid1", 00:12:29.331 "superblock": true, 00:12:29.331 "num_base_bdevs": 2, 00:12:29.331 "num_base_bdevs_discovered": 2, 00:12:29.331 "num_base_bdevs_operational": 2, 00:12:29.331 "process": { 00:12:29.331 "type": "rebuild", 00:12:29.331 "target": "spare", 00:12:29.331 "progress": { 00:12:29.331 "blocks": 45056, 00:12:29.331 "percent": 70 00:12:29.331 } 00:12:29.331 }, 00:12:29.331 "base_bdevs_list": [ 00:12:29.331 { 
00:12:29.331 "name": "spare", 00:12:29.331 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:29.331 "is_configured": true, 00:12:29.331 "data_offset": 2048, 00:12:29.331 "data_size": 63488 00:12:29.331 }, 00:12:29.331 { 00:12:29.331 "name": "BaseBdev2", 00:12:29.331 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:29.331 "is_configured": true, 00:12:29.331 "data_offset": 2048, 00:12:29.331 "data_size": 63488 00:12:29.331 } 00:12:29.331 ] 00:12:29.331 }' 00:12:29.331 09:26:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.331 09:26:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.331 09:26:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.331 09:26:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.331 09:26:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.901 [2024-12-12 09:26:03.776884] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.901 [2024-12-12 09:26:03.777104] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.901 [2024-12-12 09:26:03.777244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.160 09:26:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.160 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.160 "name": "raid_bdev1", 00:12:30.160 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:30.160 "strip_size_kb": 0, 00:12:30.160 "state": "online", 00:12:30.160 "raid_level": "raid1", 00:12:30.160 "superblock": true, 00:12:30.160 "num_base_bdevs": 2, 00:12:30.160 "num_base_bdevs_discovered": 2, 00:12:30.160 "num_base_bdevs_operational": 2, 00:12:30.160 "base_bdevs_list": [ 00:12:30.160 { 00:12:30.160 "name": "spare", 00:12:30.160 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:30.160 "is_configured": true, 00:12:30.160 "data_offset": 2048, 00:12:30.160 "data_size": 63488 00:12:30.160 }, 00:12:30.160 { 00:12:30.160 "name": "BaseBdev2", 00:12:30.160 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:30.160 "is_configured": true, 00:12:30.161 "data_offset": 2048, 00:12:30.161 "data_size": 63488 00:12:30.161 } 00:12:30.161 ] 00:12:30.161 }' 00:12:30.161 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.421 "name": "raid_bdev1", 00:12:30.421 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:30.421 "strip_size_kb": 0, 00:12:30.421 "state": "online", 00:12:30.421 "raid_level": "raid1", 00:12:30.421 "superblock": true, 00:12:30.421 "num_base_bdevs": 2, 00:12:30.421 "num_base_bdevs_discovered": 2, 00:12:30.421 "num_base_bdevs_operational": 2, 00:12:30.421 "base_bdevs_list": [ 00:12:30.421 { 00:12:30.421 "name": "spare", 00:12:30.421 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:30.421 "is_configured": true, 00:12:30.421 "data_offset": 2048, 00:12:30.421 "data_size": 63488 00:12:30.421 }, 00:12:30.421 { 00:12:30.421 "name": 
"BaseBdev2", 00:12:30.421 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:30.421 "is_configured": true, 00:12:30.421 "data_offset": 2048, 00:12:30.421 "data_size": 63488 00:12:30.421 } 00:12:30.421 ] 00:12:30.421 }' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.421 "name": "raid_bdev1", 00:12:30.421 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:30.421 "strip_size_kb": 0, 00:12:30.421 "state": "online", 00:12:30.421 "raid_level": "raid1", 00:12:30.421 "superblock": true, 00:12:30.421 "num_base_bdevs": 2, 00:12:30.421 "num_base_bdevs_discovered": 2, 00:12:30.421 "num_base_bdevs_operational": 2, 00:12:30.421 "base_bdevs_list": [ 00:12:30.421 { 00:12:30.421 "name": "spare", 00:12:30.421 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:30.421 "is_configured": true, 00:12:30.421 "data_offset": 2048, 00:12:30.421 "data_size": 63488 00:12:30.421 }, 00:12:30.421 { 00:12:30.421 "name": "BaseBdev2", 00:12:30.421 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:30.421 "is_configured": true, 00:12:30.421 "data_offset": 2048, 00:12:30.421 "data_size": 63488 00:12:30.421 } 00:12:30.421 ] 00:12:30.421 }' 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.421 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.991 [2024-12-12 09:26:04.812681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.991 [2024-12-12 09:26:04.812779] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.991 [2024-12-12 09:26:04.812891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.991 [2024-12-12 09:26:04.813009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.991 [2024-12-12 09:26:04.813058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.991 09:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:31.249 /dev/nbd0 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.249 1+0 records in 00:12:31.249 1+0 records out 00:12:31.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000418452 s, 9.8 MB/s 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.249 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:31.508 /dev/nbd1 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:31.508 09:26:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.508 1+0 records in 00:12:31.508 1+0 records out 00:12:31.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271805 s, 15.1 MB/s 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:31.508 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.508 
09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.767 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.026 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.027 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.027 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.027 [2024-12-12 09:26:05.957137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.027 [2024-12-12 09:26:05.957240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.027 [2024-12-12 09:26:05.957269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:32.027 [2024-12-12 09:26:05.957278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.027 [2024-12-12 09:26:05.959777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.027 [2024-12-12 09:26:05.959817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.027 [2024-12-12 09:26:05.959918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:32.027 [2024-12-12 09:26:05.960011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.027 [2024-12-12 09:26:05.960177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:32.027 spare 00:12:32.027 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.027 09:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:32.027 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.027 09:26:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.286 [2024-12-12 09:26:06.060084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:32.286 [2024-12-12 09:26:06.060115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.286 [2024-12-12 09:26:06.060409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:32.286 [2024-12-12 09:26:06.060610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:32.286 [2024-12-12 09:26:06.060620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:32.286 [2024-12-12 09:26:06.060787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.286 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.287 "name": "raid_bdev1", 00:12:32.287 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:32.287 "strip_size_kb": 0, 00:12:32.287 "state": "online", 00:12:32.287 "raid_level": "raid1", 00:12:32.287 "superblock": true, 00:12:32.287 "num_base_bdevs": 2, 00:12:32.287 "num_base_bdevs_discovered": 2, 00:12:32.287 "num_base_bdevs_operational": 2, 00:12:32.287 "base_bdevs_list": [ 00:12:32.287 { 00:12:32.287 "name": "spare", 00:12:32.287 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:32.287 "is_configured": true, 00:12:32.287 "data_offset": 2048, 00:12:32.287 "data_size": 63488 00:12:32.287 }, 00:12:32.287 { 00:12:32.287 "name": "BaseBdev2", 00:12:32.287 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:32.287 "is_configured": true, 00:12:32.287 "data_offset": 2048, 00:12:32.287 "data_size": 63488 00:12:32.287 } 00:12:32.287 ] 00:12:32.287 }' 00:12:32.287 09:26:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.287 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.546 "name": "raid_bdev1", 00:12:32.546 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:32.546 "strip_size_kb": 0, 00:12:32.546 "state": "online", 00:12:32.546 "raid_level": "raid1", 00:12:32.546 "superblock": true, 00:12:32.546 "num_base_bdevs": 2, 00:12:32.546 "num_base_bdevs_discovered": 2, 00:12:32.546 "num_base_bdevs_operational": 2, 00:12:32.546 "base_bdevs_list": [ 00:12:32.546 { 00:12:32.546 "name": "spare", 00:12:32.546 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:32.546 "is_configured": true, 00:12:32.546 "data_offset": 2048, 00:12:32.546 "data_size": 63488 00:12:32.546 }, 
00:12:32.546 { 00:12:32.546 "name": "BaseBdev2", 00:12:32.546 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:32.546 "is_configured": true, 00:12:32.546 "data_offset": 2048, 00:12:32.546 "data_size": 63488 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 }' 00:12:32.546 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.806 [2024-12-12 09:26:06.664005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.806 "name": "raid_bdev1", 00:12:32.806 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:32.806 "strip_size_kb": 0, 00:12:32.806 "state": "online", 00:12:32.806 "raid_level": "raid1", 00:12:32.806 "superblock": true, 00:12:32.806 "num_base_bdevs": 2, 00:12:32.806 "num_base_bdevs_discovered": 1, 00:12:32.806 "num_base_bdevs_operational": 
1, 00:12:32.806 "base_bdevs_list": [ 00:12:32.806 { 00:12:32.806 "name": null, 00:12:32.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.806 "is_configured": false, 00:12:32.806 "data_offset": 0, 00:12:32.806 "data_size": 63488 00:12:32.806 }, 00:12:32.806 { 00:12:32.806 "name": "BaseBdev2", 00:12:32.806 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:32.806 "is_configured": true, 00:12:32.806 "data_offset": 2048, 00:12:32.806 "data_size": 63488 00:12:32.806 } 00:12:32.806 ] 00:12:32.806 }' 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.806 09:26:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.374 09:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.374 09:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.374 09:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.374 [2024-12-12 09:26:07.123280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.374 [2024-12-12 09:26:07.123540] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:33.374 [2024-12-12 09:26:07.123603] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:33.374 [2024-12-12 09:26:07.123718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.374 [2024-12-12 09:26:07.140496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:33.374 09:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.374 09:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:33.374 [2024-12-12 09:26:07.142680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.311 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.311 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.311 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.312 "name": "raid_bdev1", 00:12:34.312 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:34.312 "strip_size_kb": 0, 00:12:34.312 "state": "online", 00:12:34.312 "raid_level": "raid1", 
00:12:34.312 "superblock": true, 00:12:34.312 "num_base_bdevs": 2, 00:12:34.312 "num_base_bdevs_discovered": 2, 00:12:34.312 "num_base_bdevs_operational": 2, 00:12:34.312 "process": { 00:12:34.312 "type": "rebuild", 00:12:34.312 "target": "spare", 00:12:34.312 "progress": { 00:12:34.312 "blocks": 20480, 00:12:34.312 "percent": 32 00:12:34.312 } 00:12:34.312 }, 00:12:34.312 "base_bdevs_list": [ 00:12:34.312 { 00:12:34.312 "name": "spare", 00:12:34.312 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:34.312 "is_configured": true, 00:12:34.312 "data_offset": 2048, 00:12:34.312 "data_size": 63488 00:12:34.312 }, 00:12:34.312 { 00:12:34.312 "name": "BaseBdev2", 00:12:34.312 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:34.312 "is_configured": true, 00:12:34.312 "data_offset": 2048, 00:12:34.312 "data_size": 63488 00:12:34.312 } 00:12:34.312 ] 00:12:34.312 }' 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.312 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.312 [2024-12-12 09:26:08.309700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.571 [2024-12-12 09:26:08.351237] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.571 [2024-12-12 09:26:08.351302] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:34.571 [2024-12-12 09:26:08.351317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.571 [2024-12-12 09:26:08.351327] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.571 "name": "raid_bdev1", 00:12:34.571 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:34.571 "strip_size_kb": 0, 00:12:34.571 "state": "online", 00:12:34.571 "raid_level": "raid1", 00:12:34.571 "superblock": true, 00:12:34.571 "num_base_bdevs": 2, 00:12:34.571 "num_base_bdevs_discovered": 1, 00:12:34.571 "num_base_bdevs_operational": 1, 00:12:34.571 "base_bdevs_list": [ 00:12:34.571 { 00:12:34.571 "name": null, 00:12:34.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.571 "is_configured": false, 00:12:34.571 "data_offset": 0, 00:12:34.571 "data_size": 63488 00:12:34.571 }, 00:12:34.571 { 00:12:34.571 "name": "BaseBdev2", 00:12:34.571 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:34.571 "is_configured": true, 00:12:34.571 "data_offset": 2048, 00:12:34.571 "data_size": 63488 00:12:34.571 } 00:12:34.571 ] 00:12:34.571 }' 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.571 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.831 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.831 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.831 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.831 [2024-12-12 09:26:08.834391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.831 [2024-12-12 09:26:08.834516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.831 [2024-12-12 09:26:08.834543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:34.831 [2024-12-12 09:26:08.834556] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.831 [2024-12-12 09:26:08.835144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.831 [2024-12-12 09:26:08.835169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.831 [2024-12-12 09:26:08.835265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:34.831 [2024-12-12 09:26:08.835282] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:34.831 [2024-12-12 09:26:08.835293] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:34.831 [2024-12-12 09:26:08.835323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.831 [2024-12-12 09:26:08.852127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:35.091 spare 00:12:35.091 09:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.091 09:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:35.091 [2024-12-12 09:26:08.854273] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.029 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.029 "name": "raid_bdev1", 00:12:36.029 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:36.029 "strip_size_kb": 0, 00:12:36.029 "state": "online", 00:12:36.029 "raid_level": "raid1", 00:12:36.029 "superblock": true, 00:12:36.029 "num_base_bdevs": 2, 00:12:36.029 "num_base_bdevs_discovered": 2, 00:12:36.029 "num_base_bdevs_operational": 2, 00:12:36.029 "process": { 00:12:36.029 "type": "rebuild", 00:12:36.029 "target": "spare", 00:12:36.029 "progress": { 00:12:36.029 "blocks": 20480, 00:12:36.029 "percent": 32 00:12:36.029 } 00:12:36.029 }, 00:12:36.029 "base_bdevs_list": [ 00:12:36.029 { 00:12:36.029 "name": "spare", 00:12:36.029 "uuid": "a84781e9-36c9-554a-92d7-3de1e1d3c357", 00:12:36.029 "is_configured": true, 00:12:36.029 "data_offset": 2048, 00:12:36.029 "data_size": 63488 00:12:36.029 }, 00:12:36.029 { 00:12:36.029 "name": "BaseBdev2", 00:12:36.029 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:36.030 "is_configured": true, 00:12:36.030 "data_offset": 2048, 00:12:36.030 "data_size": 63488 00:12:36.030 } 00:12:36.030 ] 00:12:36.030 }' 00:12:36.030 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.030 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.030 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.030 
09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.030 09:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.030 09:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.030 09:26:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.030 [2024-12-12 09:26:10.001517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.289 [2024-12-12 09:26:10.063107] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.289 [2024-12-12 09:26:10.063165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.289 [2024-12-12 09:26:10.063184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.289 [2024-12-12 09:26:10.063191] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.289 09:26:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.290 "name": "raid_bdev1", 00:12:36.290 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:36.290 "strip_size_kb": 0, 00:12:36.290 "state": "online", 00:12:36.290 "raid_level": "raid1", 00:12:36.290 "superblock": true, 00:12:36.290 "num_base_bdevs": 2, 00:12:36.290 "num_base_bdevs_discovered": 1, 00:12:36.290 "num_base_bdevs_operational": 1, 00:12:36.290 "base_bdevs_list": [ 00:12:36.290 { 00:12:36.290 "name": null, 00:12:36.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.290 "is_configured": false, 00:12:36.290 "data_offset": 0, 00:12:36.290 "data_size": 63488 00:12:36.290 }, 00:12:36.290 { 00:12:36.290 "name": "BaseBdev2", 00:12:36.290 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:36.290 "is_configured": true, 00:12:36.290 "data_offset": 2048, 00:12:36.290 "data_size": 63488 00:12:36.290 } 00:12:36.290 ] 00:12:36.290 }' 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.290 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.550 09:26:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.550 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.810 "name": "raid_bdev1", 00:12:36.810 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:36.810 "strip_size_kb": 0, 00:12:36.810 "state": "online", 00:12:36.810 "raid_level": "raid1", 00:12:36.810 "superblock": true, 00:12:36.810 "num_base_bdevs": 2, 00:12:36.810 "num_base_bdevs_discovered": 1, 00:12:36.810 "num_base_bdevs_operational": 1, 00:12:36.810 "base_bdevs_list": [ 00:12:36.810 { 00:12:36.810 "name": null, 00:12:36.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.810 "is_configured": false, 00:12:36.810 "data_offset": 0, 00:12:36.810 "data_size": 63488 00:12:36.810 }, 00:12:36.810 { 00:12:36.810 "name": "BaseBdev2", 00:12:36.810 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:36.810 "is_configured": true, 00:12:36.810 "data_offset": 2048, 00:12:36.810 "data_size": 
63488 00:12:36.810 } 00:12:36.810 ] 00:12:36.810 }' 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.810 [2024-12-12 09:26:10.694708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.810 [2024-12-12 09:26:10.694772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.810 [2024-12-12 09:26:10.694797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:36.810 [2024-12-12 09:26:10.694816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.810 [2024-12-12 09:26:10.695340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.810 [2024-12-12 09:26:10.695358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:36.810 [2024-12-12 09:26:10.695445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:36.810 [2024-12-12 09:26:10.695459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:36.810 [2024-12-12 09:26:10.695471] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:36.810 [2024-12-12 09:26:10.695482] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:36.810 BaseBdev1 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 09:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.749 "name": "raid_bdev1", 00:12:37.749 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:37.749 "strip_size_kb": 0, 00:12:37.749 "state": "online", 00:12:37.749 "raid_level": "raid1", 00:12:37.749 "superblock": true, 00:12:37.749 "num_base_bdevs": 2, 00:12:37.749 "num_base_bdevs_discovered": 1, 00:12:37.749 "num_base_bdevs_operational": 1, 00:12:37.749 "base_bdevs_list": [ 00:12:37.749 { 00:12:37.749 "name": null, 00:12:37.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.749 "is_configured": false, 00:12:37.749 "data_offset": 0, 00:12:37.749 "data_size": 63488 00:12:37.749 }, 00:12:37.749 { 00:12:37.749 "name": "BaseBdev2", 00:12:37.749 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:37.749 "is_configured": true, 00:12:37.749 "data_offset": 2048, 00:12:37.749 "data_size": 63488 00:12:37.749 } 00:12:37.749 ] 00:12:37.749 }' 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.749 09:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.319 "name": "raid_bdev1", 00:12:38.319 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:38.319 "strip_size_kb": 0, 00:12:38.319 "state": "online", 00:12:38.319 "raid_level": "raid1", 00:12:38.319 "superblock": true, 00:12:38.319 "num_base_bdevs": 2, 00:12:38.319 "num_base_bdevs_discovered": 1, 00:12:38.319 "num_base_bdevs_operational": 1, 00:12:38.319 "base_bdevs_list": [ 00:12:38.319 { 00:12:38.319 "name": null, 00:12:38.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.319 "is_configured": false, 00:12:38.319 "data_offset": 0, 00:12:38.319 "data_size": 63488 00:12:38.319 }, 00:12:38.319 { 00:12:38.319 "name": "BaseBdev2", 00:12:38.319 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:38.319 "is_configured": true, 00:12:38.319 "data_offset": 2048, 00:12:38.319 "data_size": 63488 00:12:38.319 } 00:12:38.319 ] 00:12:38.319 }' 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.319 09:26:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.319 [2024-12-12 09:26:12.312086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.319 [2024-12-12 09:26:12.312365] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:38.319 [2024-12-12 09:26:12.312435] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.319 request: 00:12:38.319 { 00:12:38.319 "base_bdev": "BaseBdev1", 00:12:38.319 "raid_bdev": "raid_bdev1", 00:12:38.319 "method": 
"bdev_raid_add_base_bdev", 00:12:38.319 "req_id": 1 00:12:38.319 } 00:12:38.319 Got JSON-RPC error response 00:12:38.319 response: 00:12:38.319 { 00:12:38.319 "code": -22, 00:12:38.319 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:38.319 } 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:38.319 09:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.700 09:26:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.700 "name": "raid_bdev1", 00:12:39.700 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:39.700 "strip_size_kb": 0, 00:12:39.700 "state": "online", 00:12:39.700 "raid_level": "raid1", 00:12:39.700 "superblock": true, 00:12:39.700 "num_base_bdevs": 2, 00:12:39.700 "num_base_bdevs_discovered": 1, 00:12:39.700 "num_base_bdevs_operational": 1, 00:12:39.700 "base_bdevs_list": [ 00:12:39.700 { 00:12:39.700 "name": null, 00:12:39.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.700 "is_configured": false, 00:12:39.700 "data_offset": 0, 00:12:39.700 "data_size": 63488 00:12:39.700 }, 00:12:39.700 { 00:12:39.700 "name": "BaseBdev2", 00:12:39.700 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:39.700 "is_configured": true, 00:12:39.700 "data_offset": 2048, 00:12:39.700 "data_size": 63488 00:12:39.700 } 00:12:39.700 ] 00:12:39.700 }' 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.700 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.960 "name": "raid_bdev1", 00:12:39.960 "uuid": "fd4d1235-1fbe-4d01-8d10-f733aff1535e", 00:12:39.960 "strip_size_kb": 0, 00:12:39.960 "state": "online", 00:12:39.960 "raid_level": "raid1", 00:12:39.960 "superblock": true, 00:12:39.960 "num_base_bdevs": 2, 00:12:39.960 "num_base_bdevs_discovered": 1, 00:12:39.960 "num_base_bdevs_operational": 1, 00:12:39.960 "base_bdevs_list": [ 00:12:39.960 { 00:12:39.960 "name": null, 00:12:39.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.960 "is_configured": false, 00:12:39.960 "data_offset": 0, 00:12:39.960 "data_size": 63488 00:12:39.960 }, 00:12:39.960 { 00:12:39.960 "name": "BaseBdev2", 00:12:39.960 "uuid": "a9dd7210-a1d9-5512-8de5-7cb6cf2c1012", 00:12:39.960 "is_configured": true, 00:12:39.960 "data_offset": 2048, 00:12:39.960 "data_size": 63488 00:12:39.960 } 00:12:39.960 ] 00:12:39.960 }' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76863 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76863 ']' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76863 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76863 00:12:39.960 killing process with pid 76863 00:12:39.960 Received shutdown signal, test time was about 60.000000 seconds 00:12:39.960 00:12:39.960 Latency(us) 00:12:39.960 [2024-12-12T09:26:13.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.960 [2024-12-12T09:26:13.983Z] =================================================================================================================== 00:12:39.960 [2024-12-12T09:26:13.983Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76863' 00:12:39.960 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76863 00:12:39.961 [2024-12-12 09:26:13.953861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.961 [2024-12-12 
09:26:13.954013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.961 09:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76863 00:12:39.961 [2024-12-12 09:26:13.954078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.961 [2024-12-12 09:26:13.954092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:40.550 [2024-12-12 09:26:14.273439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:41.538 00:12:41.538 real 0m23.377s 00:12:41.538 user 0m28.322s 00:12:41.538 sys 0m3.830s 00:12:41.538 ************************************ 00:12:41.538 END TEST raid_rebuild_test_sb 00:12:41.538 ************************************ 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.538 09:26:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:41.538 09:26:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:41.538 09:26:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.538 09:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.538 ************************************ 00:12:41.538 START TEST raid_rebuild_test_io 00:12:41.538 ************************************ 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:41.538 
09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77594 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77594 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 77594 ']' 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.538 09:26:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.798 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:41.798 Zero copy mechanism will not be used. 00:12:41.798 [2024-12-12 09:26:15.632967] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:12:41.798 [2024-12-12 09:26:15.633110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77594 ] 00:12:41.798 [2024-12-12 09:26:15.813163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.058 [2024-12-12 09:26:15.941787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.318 [2024-12-12 09:26:16.154643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.318 [2024-12-12 09:26:16.154829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 BaseBdev1_malloc 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 [2024-12-12 09:26:16.498588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:42.578 [2024-12-12 09:26:16.498663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.578 [2024-12-12 09:26:16.498691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.578 [2024-12-12 09:26:16.498703] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.578 [2024-12-12 09:26:16.501164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.578 [2024-12-12 09:26:16.501216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.578 BaseBdev1 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 BaseBdev2_malloc 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 [2024-12-12 09:26:16.558445] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:42.578 [2024-12-12 09:26:16.558508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.578 [2024-12-12 09:26:16.558531] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.578 [2024-12-12 09:26:16.558544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.578 [2024-12-12 09:26:16.560917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.578 [2024-12-12 09:26:16.561045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.578 BaseBdev2 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.839 spare_malloc 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.839 spare_delay 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.839 [2024-12-12 09:26:16.667759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:42.839 [2024-12-12 09:26:16.667818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.839 [2024-12-12 09:26:16.667838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:42.839 [2024-12-12 09:26:16.667850] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.839 [2024-12-12 09:26:16.670254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.839 [2024-12-12 09:26:16.670291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.839 spare 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.839 [2024-12-12 09:26:16.679805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.839 [2024-12-12 09:26:16.681913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.839 [2024-12-12 09:26:16.682017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.839 [2024-12-12 09:26:16.682031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:42.839 [2024-12-12 09:26:16.682274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:42.839 [2024-12-12 09:26:16.682465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.839 [2024-12-12 09:26:16.682482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:42.839 [2024-12-12 09:26:16.682630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.839 
"name": "raid_bdev1", 00:12:42.839 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:42.839 "strip_size_kb": 0, 00:12:42.839 "state": "online", 00:12:42.839 "raid_level": "raid1", 00:12:42.839 "superblock": false, 00:12:42.839 "num_base_bdevs": 2, 00:12:42.839 "num_base_bdevs_discovered": 2, 00:12:42.839 "num_base_bdevs_operational": 2, 00:12:42.839 "base_bdevs_list": [ 00:12:42.839 { 00:12:42.839 "name": "BaseBdev1", 00:12:42.839 "uuid": "4ba0c2a5-4784-533b-83b5-16fc326e6c66", 00:12:42.839 "is_configured": true, 00:12:42.839 "data_offset": 0, 00:12:42.839 "data_size": 65536 00:12:42.839 }, 00:12:42.839 { 00:12:42.839 "name": "BaseBdev2", 00:12:42.839 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:42.839 "is_configured": true, 00:12:42.839 "data_offset": 0, 00:12:42.839 "data_size": 65536 00:12:42.839 } 00:12:42.839 ] 00:12:42.839 }' 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.839 09:26:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.099 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.099 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.099 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.099 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:43.099 [2024-12-12 09:26:17.095318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.099 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.359 [2024-12-12 09:26:17.194866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.359 09:26:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.359 "name": "raid_bdev1", 00:12:43.359 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:43.359 "strip_size_kb": 0, 00:12:43.359 "state": "online", 00:12:43.359 "raid_level": "raid1", 00:12:43.359 "superblock": false, 00:12:43.359 "num_base_bdevs": 2, 00:12:43.359 "num_base_bdevs_discovered": 1, 00:12:43.359 "num_base_bdevs_operational": 1, 00:12:43.359 "base_bdevs_list": [ 00:12:43.359 { 00:12:43.359 "name": null, 00:12:43.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.359 "is_configured": false, 00:12:43.359 "data_offset": 0, 00:12:43.359 "data_size": 65536 00:12:43.359 }, 00:12:43.359 { 00:12:43.359 "name": "BaseBdev2", 00:12:43.359 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:43.359 "is_configured": true, 00:12:43.359 "data_offset": 0, 00:12:43.359 "data_size": 65536 00:12:43.359 } 00:12:43.359 ] 00:12:43.359 }' 00:12:43.359 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:43.360 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 [2024-12-12 09:26:17.292109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:43.360 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:43.360 Zero copy mechanism will not be used. 00:12:43.360 Running I/O for 60 seconds... 00:12:43.619 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.619 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.619 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.880 [2024-12-12 09:26:17.644249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.880 09:26:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.880 09:26:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:43.880 [2024-12-12 09:26:17.684971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:43.880 [2024-12-12 09:26:17.687245] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:43.880 [2024-12-12 09:26:17.793262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.880 [2024-12-12 09:26:17.793954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.140 [2024-12-12 09:26:18.008385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.140 [2024-12-12 09:26:18.008913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.400 156.00 IOPS, 468.00 MiB/s 
[2024-12-12T09:26:18.423Z] [2024-12-12 09:26:18.354667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:44.400 [2024-12-12 09:26:18.355671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:44.660 [2024-12-12 09:26:18.573712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.920 "name": "raid_bdev1", 00:12:44.920 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:44.920 "strip_size_kb": 0, 00:12:44.920 "state": "online", 00:12:44.920 "raid_level": "raid1", 00:12:44.920 "superblock": false, 00:12:44.920 "num_base_bdevs": 2, 00:12:44.920 
"num_base_bdevs_discovered": 2, 00:12:44.920 "num_base_bdevs_operational": 2, 00:12:44.920 "process": { 00:12:44.920 "type": "rebuild", 00:12:44.920 "target": "spare", 00:12:44.920 "progress": { 00:12:44.920 "blocks": 10240, 00:12:44.920 "percent": 15 00:12:44.920 } 00:12:44.920 }, 00:12:44.920 "base_bdevs_list": [ 00:12:44.920 { 00:12:44.920 "name": "spare", 00:12:44.920 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:44.920 "is_configured": true, 00:12:44.920 "data_offset": 0, 00:12:44.920 "data_size": 65536 00:12:44.920 }, 00:12:44.920 { 00:12:44.920 "name": "BaseBdev2", 00:12:44.920 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:44.920 "is_configured": true, 00:12:44.920 "data_offset": 0, 00:12:44.920 "data_size": 65536 00:12:44.920 } 00:12:44.920 ] 00:12:44.920 }' 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.920 09:26:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.920 [2024-12-12 09:26:18.817330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.920 [2024-12-12 09:26:18.818119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:44.920 [2024-12-12 09:26:18.930365] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:12:44.920 [2024-12-12 09:26:18.939086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.920 [2024-12-12 09:26:18.939175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.920 [2024-12-12 09:26:18.939206] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.180 [2024-12-12 09:26:18.977325] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.180 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.181 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.181 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.181 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.181 09:26:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.181 "name": "raid_bdev1", 00:12:45.181 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:45.181 "strip_size_kb": 0, 00:12:45.181 "state": "online", 00:12:45.181 "raid_level": "raid1", 00:12:45.181 "superblock": false, 00:12:45.181 "num_base_bdevs": 2, 00:12:45.181 "num_base_bdevs_discovered": 1, 00:12:45.181 "num_base_bdevs_operational": 1, 00:12:45.181 "base_bdevs_list": [ 00:12:45.181 { 00:12:45.181 "name": null, 00:12:45.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.181 "is_configured": false, 00:12:45.181 "data_offset": 0, 00:12:45.181 "data_size": 65536 00:12:45.181 }, 00:12:45.181 { 00:12:45.181 "name": "BaseBdev2", 00:12:45.181 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:45.181 "is_configured": true, 00:12:45.181 "data_offset": 0, 00:12:45.181 "data_size": 65536 00:12:45.181 } 00:12:45.181 ] 00:12:45.181 }' 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.181 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.441 145.00 IOPS, 435.00 MiB/s [2024-12-12T09:26:19.464Z] 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.441 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.701 "name": "raid_bdev1", 00:12:45.701 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:45.701 "strip_size_kb": 0, 00:12:45.701 "state": "online", 00:12:45.701 "raid_level": "raid1", 00:12:45.701 "superblock": false, 00:12:45.701 "num_base_bdevs": 2, 00:12:45.701 "num_base_bdevs_discovered": 1, 00:12:45.701 "num_base_bdevs_operational": 1, 00:12:45.701 "base_bdevs_list": [ 00:12:45.701 { 00:12:45.701 "name": null, 00:12:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.701 "is_configured": false, 00:12:45.701 "data_offset": 0, 00:12:45.701 "data_size": 65536 00:12:45.701 }, 00:12:45.701 { 00:12:45.701 "name": "BaseBdev2", 00:12:45.701 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:45.701 "is_configured": true, 00:12:45.701 "data_offset": 0, 00:12:45.701 "data_size": 65536 00:12:45.701 } 00:12:45.701 ] 00:12:45.701 }' 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.701 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.702 [2024-12-12 09:26:19.613138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.702 09:26:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.702 09:26:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:45.702 [2024-12-12 09:26:19.672152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:45.702 [2024-12-12 09:26:19.674446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.961 [2024-12-12 09:26:19.781945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:45.961 [2024-12-12 09:26:19.782559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.221 [2024-12-12 09:26:19.996267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.221 [2024-12-12 09:26:19.996711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.481 156.00 IOPS, 468.00 MiB/s [2024-12-12T09:26:20.504Z] [2024-12-12 09:26:20.320074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:46.740 [2024-12-12 09:26:20.535911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:46.740 [2024-12-12 09:26:20.536483] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.740 09:26:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.741 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.741 "name": "raid_bdev1", 00:12:46.741 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:46.741 "strip_size_kb": 0, 00:12:46.741 "state": "online", 00:12:46.741 "raid_level": "raid1", 00:12:46.741 "superblock": false, 00:12:46.741 "num_base_bdevs": 2, 00:12:46.741 "num_base_bdevs_discovered": 2, 00:12:46.741 "num_base_bdevs_operational": 2, 00:12:46.741 "process": { 00:12:46.741 "type": "rebuild", 00:12:46.741 "target": "spare", 00:12:46.741 "progress": { 00:12:46.741 "blocks": 10240, 00:12:46.741 "percent": 15 00:12:46.741 } 00:12:46.741 }, 00:12:46.741 "base_bdevs_list": [ 00:12:46.741 { 00:12:46.741 "name": "spare", 00:12:46.741 "uuid": 
"f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:46.741 "is_configured": true, 00:12:46.741 "data_offset": 0, 00:12:46.741 "data_size": 65536 00:12:46.741 }, 00:12:46.741 { 00:12:46.741 "name": "BaseBdev2", 00:12:46.741 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:46.741 "is_configured": true, 00:12:46.741 "data_offset": 0, 00:12:46.741 "data_size": 65536 00:12:46.741 } 00:12:46.741 ] 00:12:46.741 }' 00:12:46.741 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.741 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.741 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.000 09:26:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.000 "name": "raid_bdev1", 00:12:47.000 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:47.000 "strip_size_kb": 0, 00:12:47.000 "state": "online", 00:12:47.000 "raid_level": "raid1", 00:12:47.000 "superblock": false, 00:12:47.000 "num_base_bdevs": 2, 00:12:47.000 "num_base_bdevs_discovered": 2, 00:12:47.000 "num_base_bdevs_operational": 2, 00:12:47.000 "process": { 00:12:47.000 "type": "rebuild", 00:12:47.000 "target": "spare", 00:12:47.000 "progress": { 00:12:47.000 "blocks": 12288, 00:12:47.000 "percent": 18 00:12:47.000 } 00:12:47.000 }, 00:12:47.000 "base_bdevs_list": [ 00:12:47.000 { 00:12:47.000 "name": "spare", 00:12:47.000 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:47.000 "is_configured": true, 00:12:47.000 "data_offset": 0, 00:12:47.000 "data_size": 65536 00:12:47.000 }, 00:12:47.000 { 00:12:47.000 "name": "BaseBdev2", 00:12:47.000 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:47.000 "is_configured": true, 00:12:47.000 "data_offset": 0, 00:12:47.000 "data_size": 65536 00:12:47.000 } 00:12:47.000 ] 00:12:47.000 }' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.000 [2024-12-12 09:26:20.854812] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.000 09:26:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:47.000 [2024-12-12 09:26:20.979427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:47.568 136.50 IOPS, 409.50 MiB/s [2024-12-12T09:26:21.591Z] [2024-12-12 09:26:21.454507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.138 09:26:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.138 09:26:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.138 "name": "raid_bdev1", 00:12:48.138 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:48.138 "strip_size_kb": 0, 00:12:48.138 "state": "online", 00:12:48.138 "raid_level": "raid1", 00:12:48.138 "superblock": false, 00:12:48.138 "num_base_bdevs": 2, 00:12:48.138 "num_base_bdevs_discovered": 2, 00:12:48.138 "num_base_bdevs_operational": 2, 00:12:48.138 "process": { 00:12:48.138 "type": "rebuild", 00:12:48.138 "target": "spare", 00:12:48.138 "progress": { 00:12:48.138 "blocks": 28672, 00:12:48.138 "percent": 43 00:12:48.138 } 00:12:48.138 }, 00:12:48.138 "base_bdevs_list": [ 00:12:48.138 { 00:12:48.138 "name": "spare", 00:12:48.138 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:48.138 "is_configured": true, 00:12:48.138 "data_offset": 0, 00:12:48.138 "data_size": 65536 00:12:48.138 }, 00:12:48.138 { 00:12:48.138 "name": "BaseBdev2", 00:12:48.138 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:48.138 "is_configured": true, 00:12:48.138 "data_offset": 0, 00:12:48.138 "data_size": 65536 00:12:48.138 } 00:12:48.138 ] 00:12:48.138 }' 00:12:48.138 09:26:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.138 09:26:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.138 09:26:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.138 09:26:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.138 09:26:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.656 120.40 IOPS, 361.20 MiB/s [2024-12-12T09:26:22.679Z] [2024-12-12 09:26:22.531977] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:48.916 [2024-12-12 09:26:22.879850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:49.176 [2024-12-12 09:26:23.093791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:49.176 [2024-12-12 09:26:23.094230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.176 "name": "raid_bdev1", 00:12:49.176 "uuid": 
"59eca399-abba-4fc5-be17-8652f943adeb", 00:12:49.176 "strip_size_kb": 0, 00:12:49.176 "state": "online", 00:12:49.176 "raid_level": "raid1", 00:12:49.176 "superblock": false, 00:12:49.176 "num_base_bdevs": 2, 00:12:49.176 "num_base_bdevs_discovered": 2, 00:12:49.176 "num_base_bdevs_operational": 2, 00:12:49.176 "process": { 00:12:49.176 "type": "rebuild", 00:12:49.176 "target": "spare", 00:12:49.176 "progress": { 00:12:49.176 "blocks": 47104, 00:12:49.176 "percent": 71 00:12:49.176 } 00:12:49.176 }, 00:12:49.176 "base_bdevs_list": [ 00:12:49.176 { 00:12:49.176 "name": "spare", 00:12:49.176 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:49.176 "is_configured": true, 00:12:49.176 "data_offset": 0, 00:12:49.176 "data_size": 65536 00:12:49.176 }, 00:12:49.176 { 00:12:49.176 "name": "BaseBdev2", 00:12:49.176 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:49.176 "is_configured": true, 00:12:49.176 "data_offset": 0, 00:12:49.176 "data_size": 65536 00:12:49.176 } 00:12:49.176 ] 00:12:49.176 }' 00:12:49.176 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.436 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.436 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.436 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.436 09:26:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.436 106.50 IOPS, 319.50 MiB/s [2024-12-12T09:26:23.459Z] [2024-12-12 09:26:23.429216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:49.695 [2024-12-12 09:26:23.652589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:50.263 [2024-12-12 09:26:23.979080] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.263 09:26:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.523 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.523 "name": "raid_bdev1", 00:12:50.523 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:50.523 "strip_size_kb": 0, 00:12:50.523 "state": "online", 00:12:50.523 "raid_level": "raid1", 00:12:50.523 "superblock": false, 00:12:50.523 "num_base_bdevs": 2, 00:12:50.523 "num_base_bdevs_discovered": 2, 00:12:50.523 "num_base_bdevs_operational": 2, 00:12:50.523 "process": { 00:12:50.523 "type": "rebuild", 00:12:50.523 "target": "spare", 00:12:50.523 "progress": { 00:12:50.523 "blocks": 61440, 00:12:50.523 "percent": 93 00:12:50.523 } 00:12:50.523 }, 
00:12:50.523 "base_bdevs_list": [ 00:12:50.523 { 00:12:50.523 "name": "spare", 00:12:50.523 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:50.523 "is_configured": true, 00:12:50.523 "data_offset": 0, 00:12:50.523 "data_size": 65536 00:12:50.523 }, 00:12:50.523 { 00:12:50.523 "name": "BaseBdev2", 00:12:50.523 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:50.523 "is_configured": true, 00:12:50.523 "data_offset": 0, 00:12:50.523 "data_size": 65536 00:12:50.523 } 00:12:50.523 ] 00:12:50.523 }' 00:12:50.523 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.523 95.57 IOPS, 286.71 MiB/s [2024-12-12T09:26:24.546Z] 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.523 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.523 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.523 09:26:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.523 [2024-12-12 09:26:24.408633] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.523 [2024-12-12 09:26:24.508406] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.523 [2024-12-12 09:26:24.511370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.462 87.50 IOPS, 262.50 MiB/s [2024-12-12T09:26:25.485Z] 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.462 "name": "raid_bdev1", 00:12:51.462 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:51.462 "strip_size_kb": 0, 00:12:51.462 "state": "online", 00:12:51.462 "raid_level": "raid1", 00:12:51.462 "superblock": false, 00:12:51.462 "num_base_bdevs": 2, 00:12:51.462 "num_base_bdevs_discovered": 2, 00:12:51.462 "num_base_bdevs_operational": 2, 00:12:51.462 "base_bdevs_list": [ 00:12:51.462 { 00:12:51.462 "name": "spare", 00:12:51.462 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:51.462 "is_configured": true, 00:12:51.462 "data_offset": 0, 00:12:51.462 "data_size": 65536 00:12:51.462 }, 00:12:51.462 { 00:12:51.462 "name": "BaseBdev2", 00:12:51.462 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:51.462 "is_configured": true, 00:12:51.462 "data_offset": 0, 00:12:51.462 "data_size": 65536 00:12:51.462 } 00:12:51.462 ] 00:12:51.462 }' 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.462 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:51.462 09:26:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.733 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:51.733 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:51.733 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.733 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.734 "name": "raid_bdev1", 00:12:51.734 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:51.734 "strip_size_kb": 0, 00:12:51.734 "state": "online", 00:12:51.734 "raid_level": "raid1", 00:12:51.734 "superblock": false, 00:12:51.734 "num_base_bdevs": 2, 00:12:51.734 "num_base_bdevs_discovered": 2, 00:12:51.734 "num_base_bdevs_operational": 2, 00:12:51.734 "base_bdevs_list": [ 00:12:51.734 { 00:12:51.734 "name": "spare", 00:12:51.734 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 
00:12:51.734 "is_configured": true, 00:12:51.734 "data_offset": 0, 00:12:51.734 "data_size": 65536 00:12:51.734 }, 00:12:51.734 { 00:12:51.734 "name": "BaseBdev2", 00:12:51.734 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:51.734 "is_configured": true, 00:12:51.734 "data_offset": 0, 00:12:51.734 "data_size": 65536 00:12:51.734 } 00:12:51.734 ] 00:12:51.734 }' 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.734 "name": "raid_bdev1", 00:12:51.734 "uuid": "59eca399-abba-4fc5-be17-8652f943adeb", 00:12:51.734 "strip_size_kb": 0, 00:12:51.734 "state": "online", 00:12:51.734 "raid_level": "raid1", 00:12:51.734 "superblock": false, 00:12:51.734 "num_base_bdevs": 2, 00:12:51.734 "num_base_bdevs_discovered": 2, 00:12:51.734 "num_base_bdevs_operational": 2, 00:12:51.734 "base_bdevs_list": [ 00:12:51.734 { 00:12:51.734 "name": "spare", 00:12:51.734 "uuid": "f714b4b4-07cc-532e-a6f8-10b0200c5200", 00:12:51.734 "is_configured": true, 00:12:51.734 "data_offset": 0, 00:12:51.734 "data_size": 65536 00:12:51.734 }, 00:12:51.734 { 00:12:51.734 "name": "BaseBdev2", 00:12:51.734 "uuid": "5b5e44a6-d4a7-5bd9-8f32-54619f408386", 00:12:51.734 "is_configured": true, 00:12:51.734 "data_offset": 0, 00:12:51.734 "data_size": 65536 00:12:51.734 } 00:12:51.734 ] 00:12:51.734 }' 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.734 09:26:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.318 [2024-12-12 09:26:26.139922] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.318 [2024-12-12 09:26:26.139977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.318 00:12:52.318 Latency(us) 00:12:52.318 [2024-12-12T09:26:26.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.318 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:52.318 raid_bdev1 : 8.95 81.93 245.80 0.00 0.00 16682.79 284.39 113099.68 00:12:52.318 [2024-12-12T09:26:26.341Z] =================================================================================================================== 00:12:52.318 [2024-12-12T09:26:26.341Z] Total : 81.93 245.80 0.00 0.00 16682.79 284.39 113099.68 00:12:52.318 [2024-12-12 09:26:26.245911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.318 [2024-12-12 09:26:26.246066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.318 [2024-12-12 09:26:26.246182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.318 [2024-12-12 09:26:26.246231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:52.318 { 00:12:52.318 "results": [ 00:12:52.318 { 00:12:52.318 "job": "raid_bdev1", 00:12:52.318 "core_mask": "0x1", 00:12:52.318 "workload": "randrw", 00:12:52.318 "percentage": 50, 00:12:52.318 "status": "finished", 00:12:52.318 "queue_depth": 2, 00:12:52.318 "io_size": 3145728, 00:12:52.318 "runtime": 8.946471, 00:12:52.318 "iops": 81.93174716600545, 00:12:52.318 "mibps": 245.79524149801637, 00:12:52.318 "io_failed": 0, 00:12:52.318 "io_timeout": 0, 00:12:52.318 "avg_latency_us": 16682.79055624728, 00:12:52.318 "min_latency_us": 284.3947598253275, 00:12:52.318 "max_latency_us": 113099.68209606987 00:12:52.318 } 00:12:52.318 ], 00:12:52.318 
"core_count": 1 00:12:52.318 } 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.318 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:52.578 /dev/nbd0 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.579 1+0 records in 00:12:52.579 1+0 records out 00:12:52.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515403 s, 7.9 MB/s 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.579 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:52.839 /dev/nbd1 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:52.839 09:26:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.839 1+0 records in 00:12:52.839 1+0 records out 00:12:52.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682913 s, 6.0 MB/s 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.839 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.099 09:26:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:53.358 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.359 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77594 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 77594 ']' 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 77594 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77594 00:12:53.618 killing process with pid 77594 00:12:53.618 
Received shutdown signal, test time was about 10.201343 seconds 00:12:53.618 00:12:53.618 Latency(us) 00:12:53.618 [2024-12-12T09:26:27.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.618 [2024-12-12T09:26:27.641Z] =================================================================================================================== 00:12:53.618 [2024-12-12T09:26:27.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77594' 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 77594 00:12:53.618 [2024-12-12 09:26:27.476176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.618 09:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 77594 00:12:53.876 [2024-12-12 09:26:27.713847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.256 09:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:55.256 00:12:55.256 real 0m13.421s 00:12:55.256 user 0m16.540s 00:12:55.256 sys 0m1.644s 00:12:55.256 09:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.256 09:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.256 ************************************ 00:12:55.256 END TEST raid_rebuild_test_io 00:12:55.256 ************************************ 00:12:55.256 09:26:29 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:55.256 09:26:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:55.256 
09:26:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.256 09:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.256 ************************************ 00:12:55.256 START TEST raid_rebuild_test_sb_io 00:12:55.256 ************************************ 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77989 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77989 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77989 ']' 00:12:55.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.256 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.256 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.256 Zero copy mechanism will not be used. 00:12:55.256 [2024-12-12 09:26:29.132210] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:55.256 [2024-12-12 09:26:29.132330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77989 ] 00:12:55.516 [2024-12-12 09:26:29.308818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.516 [2024-12-12 09:26:29.438728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.776 [2024-12-12 09:26:29.667933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.776 [2024-12-12 09:26:29.667968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.036 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.036 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:56.036 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.036 09:26:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.036 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.036 09:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.036 BaseBdev1_malloc 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.036 [2024-12-12 09:26:30.046383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.036 [2024-12-12 09:26:30.046509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.036 [2024-12-12 09:26:30.046540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.036 [2024-12-12 09:26:30.046551] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.036 [2024-12-12 09:26:30.049018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.036 [2024-12-12 09:26:30.049058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.036 BaseBdev1 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:56.036 09:26:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.036 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.296 BaseBdev2_malloc 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.296 [2024-12-12 09:26:30.106994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:56.296 [2024-12-12 09:26:30.107055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.296 [2024-12-12 09:26:30.107077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:56.296 [2024-12-12 09:26:30.107089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.296 [2024-12-12 09:26:30.109477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.296 [2024-12-12 09:26:30.109516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.296 BaseBdev2 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.296 spare_malloc 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.296 spare_delay 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.296 [2024-12-12 09:26:30.214490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:56.296 [2024-12-12 09:26:30.214548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.296 [2024-12-12 09:26:30.214567] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:56.296 [2024-12-12 09:26:30.214578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.296 [2024-12-12 09:26:30.216964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.296 [2024-12-12 09:26:30.217010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:56.296 spare 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.296 [2024-12-12 09:26:30.226530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.296 [2024-12-12 09:26:30.228583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.296 [2024-12-12 09:26:30.228830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:56.296 [2024-12-12 09:26:30.228851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.296 [2024-12-12 09:26:30.229114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:56.296 [2024-12-12 09:26:30.229296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:56.296 [2024-12-12 09:26:30.229315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:56.296 [2024-12-12 09:26:30.229450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.296 
09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.296 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.297 "name": "raid_bdev1", 00:12:56.297 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:12:56.297 "strip_size_kb": 0, 00:12:56.297 "state": "online", 00:12:56.297 "raid_level": "raid1", 00:12:56.297 "superblock": true, 00:12:56.297 "num_base_bdevs": 2, 00:12:56.297 "num_base_bdevs_discovered": 2, 00:12:56.297 "num_base_bdevs_operational": 2, 00:12:56.297 "base_bdevs_list": [ 00:12:56.297 { 00:12:56.297 "name": "BaseBdev1", 00:12:56.297 "uuid": "35569502-0e93-50d7-b443-3f3c7d8c26ac", 00:12:56.297 "is_configured": true, 00:12:56.297 "data_offset": 2048, 00:12:56.297 "data_size": 63488 00:12:56.297 }, 00:12:56.297 { 00:12:56.297 "name": "BaseBdev2", 00:12:56.297 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:12:56.297 "is_configured": true, 00:12:56.297 "data_offset": 2048, 00:12:56.297 "data_size": 63488 00:12:56.297 } 00:12:56.297 ] 00:12:56.297 }' 00:12:56.297 09:26:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.297 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:56.866 [2024-12-12 09:26:30.662031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.866 09:26:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:56.866 [2024-12-12 09:26:30.757591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.866 09:26:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.866 "name": "raid_bdev1", 00:12:56.866 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:12:56.866 "strip_size_kb": 0, 00:12:56.866 "state": "online", 00:12:56.866 "raid_level": "raid1", 00:12:56.866 "superblock": true, 00:12:56.866 "num_base_bdevs": 2, 00:12:56.866 "num_base_bdevs_discovered": 1, 00:12:56.866 "num_base_bdevs_operational": 1, 00:12:56.866 "base_bdevs_list": [ 00:12:56.866 { 00:12:56.866 "name": null, 00:12:56.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.866 "is_configured": false, 00:12:56.866 "data_offset": 0, 00:12:56.866 "data_size": 63488 00:12:56.866 }, 00:12:56.866 { 00:12:56.866 "name": "BaseBdev2", 00:12:56.866 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:12:56.866 "is_configured": true, 00:12:56.866 "data_offset": 2048, 00:12:56.866 "data_size": 63488 00:12:56.866 } 00:12:56.866 ] 00:12:56.866 }' 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.866 09:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 [2024-12-12 09:26:30.855015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:56.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:56.866 Zero copy mechanism will not be used. 00:12:56.866 Running I/O for 60 seconds... 
00:12:57.435 09:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.435 09:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.435 09:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 [2024-12-12 09:26:31.192014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.435 09:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.435 09:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:57.435 [2024-12-12 09:26:31.254383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:57.435 [2024-12-12 09:26:31.256577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.435 [2024-12-12 09:26:31.393212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.694 [2024-12-12 09:26:31.614008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.694 [2024-12-12 09:26:31.614547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.213 203.00 IOPS, 609.00 MiB/s [2024-12-12T09:26:32.236Z] [2024-12-12 09:26:32.067802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.213 [2024-12-12 09:26:32.068219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.472 "name": "raid_bdev1", 00:12:58.472 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:12:58.472 "strip_size_kb": 0, 00:12:58.472 "state": "online", 00:12:58.472 "raid_level": "raid1", 00:12:58.472 "superblock": true, 00:12:58.472 "num_base_bdevs": 2, 00:12:58.472 "num_base_bdevs_discovered": 2, 00:12:58.472 "num_base_bdevs_operational": 2, 00:12:58.472 "process": { 00:12:58.472 "type": "rebuild", 00:12:58.472 "target": "spare", 00:12:58.472 "progress": { 00:12:58.472 "blocks": 10240, 00:12:58.472 "percent": 16 00:12:58.472 } 00:12:58.472 }, 00:12:58.472 "base_bdevs_list": [ 00:12:58.472 { 00:12:58.472 "name": "spare", 00:12:58.472 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:12:58.472 "is_configured": true, 00:12:58.472 "data_offset": 2048, 00:12:58.472 "data_size": 63488 00:12:58.472 }, 00:12:58.472 { 00:12:58.472 "name": "BaseBdev2", 00:12:58.472 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:12:58.472 
"is_configured": true, 00:12:58.472 "data_offset": 2048, 00:12:58.472 "data_size": 63488 00:12:58.472 } 00:12:58.472 ] 00:12:58.472 }' 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.472 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.472 [2024-12-12 09:26:32.404674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.472 [2024-12-12 09:26:32.404824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:58.731 [2024-12-12 09:26:32.514767] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:58.731 [2024-12-12 09:26:32.522912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.731 [2024-12-12 09:26:32.523040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.731 [2024-12-12 09:26:32.523076] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:58.731 [2024-12-12 09:26:32.555609] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.731 09:26:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.731 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.731 "name": "raid_bdev1", 00:12:58.731 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:12:58.731 "strip_size_kb": 0, 00:12:58.731 "state": "online", 00:12:58.731 "raid_level": "raid1", 00:12:58.731 
"superblock": true, 00:12:58.731 "num_base_bdevs": 2, 00:12:58.731 "num_base_bdevs_discovered": 1, 00:12:58.731 "num_base_bdevs_operational": 1, 00:12:58.731 "base_bdevs_list": [ 00:12:58.731 { 00:12:58.732 "name": null, 00:12:58.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.732 "is_configured": false, 00:12:58.732 "data_offset": 0, 00:12:58.732 "data_size": 63488 00:12:58.732 }, 00:12:58.732 { 00:12:58.732 "name": "BaseBdev2", 00:12:58.732 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:12:58.732 "is_configured": true, 00:12:58.732 "data_offset": 2048, 00:12:58.732 "data_size": 63488 00:12:58.732 } 00:12:58.732 ] 00:12:58.732 }' 00:12:58.732 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.732 09:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.250 174.50 IOPS, 523.50 MiB/s [2024-12-12T09:26:33.273Z] 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.250 "name": "raid_bdev1", 00:12:59.250 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:12:59.250 "strip_size_kb": 0, 00:12:59.250 "state": "online", 00:12:59.250 "raid_level": "raid1", 00:12:59.250 "superblock": true, 00:12:59.250 "num_base_bdevs": 2, 00:12:59.250 "num_base_bdevs_discovered": 1, 00:12:59.250 "num_base_bdevs_operational": 1, 00:12:59.250 "base_bdevs_list": [ 00:12:59.250 { 00:12:59.250 "name": null, 00:12:59.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.250 "is_configured": false, 00:12:59.250 "data_offset": 0, 00:12:59.250 "data_size": 63488 00:12:59.250 }, 00:12:59.250 { 00:12:59.250 "name": "BaseBdev2", 00:12:59.250 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:12:59.250 "is_configured": true, 00:12:59.250 "data_offset": 2048, 00:12:59.250 "data_size": 63488 00:12:59.250 } 00:12:59.250 ] 00:12:59.250 }' 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.250 [2024-12-12 09:26:33.198834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.250 09:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:59.250 [2024-12-12 09:26:33.256139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:59.250 [2024-12-12 09:26:33.258422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.510 [2024-12-12 09:26:33.376863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:59.510 [2024-12-12 09:26:33.377852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:59.769 [2024-12-12 09:26:33.598431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:59.769 [2024-12-12 09:26:33.598963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.028 179.67 IOPS, 539.00 MiB/s [2024-12-12T09:26:34.051Z] [2024-12-12 09:26:33.936644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:00.287 [2024-12-12 09:26:34.070071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.287 [2024-12-12 09:26:34.070653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.287 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.287 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.287 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.287 09:26:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.287 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.287 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.288 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.288 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.288 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.288 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.288 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.288 "name": "raid_bdev1", 00:13:00.288 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:00.288 "strip_size_kb": 0, 00:13:00.288 "state": "online", 00:13:00.288 "raid_level": "raid1", 00:13:00.288 "superblock": true, 00:13:00.288 "num_base_bdevs": 2, 00:13:00.288 "num_base_bdevs_discovered": 2, 00:13:00.288 "num_base_bdevs_operational": 2, 00:13:00.288 "process": { 00:13:00.288 "type": "rebuild", 00:13:00.288 "target": "spare", 00:13:00.288 "progress": { 00:13:00.288 "blocks": 10240, 00:13:00.288 "percent": 16 00:13:00.288 } 00:13:00.288 }, 00:13:00.288 "base_bdevs_list": [ 00:13:00.288 { 00:13:00.288 "name": "spare", 00:13:00.288 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:00.288 "is_configured": true, 00:13:00.288 "data_offset": 2048, 00:13:00.288 "data_size": 63488 00:13:00.288 }, 00:13:00.288 { 00:13:00.288 "name": "BaseBdev2", 00:13:00.288 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:00.288 "is_configured": true, 00:13:00.288 "data_offset": 2048, 00:13:00.288 "data_size": 63488 00:13:00.288 } 00:13:00.288 ] 00:13:00.288 }' 00:13:00.288 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:00.547 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=420 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.547 09:26:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.547 "name": "raid_bdev1", 00:13:00.547 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:00.547 "strip_size_kb": 0, 00:13:00.547 "state": "online", 00:13:00.547 "raid_level": "raid1", 00:13:00.547 "superblock": true, 00:13:00.547 "num_base_bdevs": 2, 00:13:00.547 "num_base_bdevs_discovered": 2, 00:13:00.547 "num_base_bdevs_operational": 2, 00:13:00.547 "process": { 00:13:00.547 "type": "rebuild", 00:13:00.547 "target": "spare", 00:13:00.547 "progress": { 00:13:00.547 "blocks": 12288, 00:13:00.547 "percent": 19 00:13:00.547 } 00:13:00.547 }, 00:13:00.547 "base_bdevs_list": [ 00:13:00.547 { 00:13:00.547 "name": "spare", 00:13:00.547 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:00.547 "is_configured": true, 00:13:00.547 "data_offset": 2048, 00:13:00.547 "data_size": 63488 00:13:00.547 }, 00:13:00.547 { 00:13:00.547 "name": "BaseBdev2", 00:13:00.547 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:00.547 "is_configured": true, 00:13:00.547 "data_offset": 2048, 00:13:00.547 "data_size": 63488 00:13:00.547 } 00:13:00.547 ] 00:13:00.547 }' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.547 09:26:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.547 09:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.547 [2024-12-12 09:26:34.521719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:00.547 [2024-12-12 09:26:34.522284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:01.115 [2024-12-12 09:26:34.841598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:01.115 150.75 IOPS, 452.25 MiB/s [2024-12-12T09:26:35.138Z] [2024-12-12 09:26:34.951998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:01.374 [2024-12-12 09:26:35.308399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.633 09:26:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.633 "name": "raid_bdev1", 00:13:01.633 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:01.633 "strip_size_kb": 0, 00:13:01.633 "state": "online", 00:13:01.633 "raid_level": "raid1", 00:13:01.633 "superblock": true, 00:13:01.633 "num_base_bdevs": 2, 00:13:01.633 "num_base_bdevs_discovered": 2, 00:13:01.633 "num_base_bdevs_operational": 2, 00:13:01.633 "process": { 00:13:01.633 "type": "rebuild", 00:13:01.633 "target": "spare", 00:13:01.633 "progress": { 00:13:01.633 "blocks": 28672, 00:13:01.633 "percent": 45 00:13:01.633 } 00:13:01.633 }, 00:13:01.633 "base_bdevs_list": [ 00:13:01.633 { 00:13:01.633 "name": "spare", 00:13:01.633 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:01.633 "is_configured": true, 00:13:01.633 "data_offset": 2048, 00:13:01.633 "data_size": 63488 00:13:01.633 }, 00:13:01.633 { 00:13:01.633 "name": "BaseBdev2", 00:13:01.633 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:01.633 "is_configured": true, 00:13:01.633 "data_offset": 2048, 00:13:01.633 "data_size": 63488 00:13:01.633 } 00:13:01.633 ] 00:13:01.633 }' 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.633 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.893 09:26:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.893 09:26:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.153 132.40 IOPS, 397.20 MiB/s [2024-12-12T09:26:36.176Z] [2024-12-12 09:26:36.121821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:02.153 [2024-12-12 09:26:36.122281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:02.722 [2024-12-12 09:26:36.449758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.722 "name": 
"raid_bdev1", 00:13:02.722 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:02.722 "strip_size_kb": 0, 00:13:02.722 "state": "online", 00:13:02.722 "raid_level": "raid1", 00:13:02.722 "superblock": true, 00:13:02.722 "num_base_bdevs": 2, 00:13:02.722 "num_base_bdevs_discovered": 2, 00:13:02.722 "num_base_bdevs_operational": 2, 00:13:02.722 "process": { 00:13:02.722 "type": "rebuild", 00:13:02.722 "target": "spare", 00:13:02.722 "progress": { 00:13:02.722 "blocks": 47104, 00:13:02.722 "percent": 74 00:13:02.722 } 00:13:02.722 }, 00:13:02.722 "base_bdevs_list": [ 00:13:02.722 { 00:13:02.722 "name": "spare", 00:13:02.722 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:02.722 "is_configured": true, 00:13:02.722 "data_offset": 2048, 00:13:02.722 "data_size": 63488 00:13:02.722 }, 00:13:02.722 { 00:13:02.722 "name": "BaseBdev2", 00:13:02.722 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:02.722 "is_configured": true, 00:13:02.722 "data_offset": 2048, 00:13:02.722 "data_size": 63488 00:13:02.722 } 00:13:02.722 ] 00:13:02.722 }' 00:13:02.723 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.992 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.992 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.992 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.992 09:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.272 118.83 IOPS, 356.50 MiB/s [2024-12-12T09:26:37.295Z] [2024-12-12 09:26:37.246055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:03.858 [2024-12-12 09:26:37.589884] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:03.858 [2024-12-12 
09:26:37.689754] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:03.858 [2024-12-12 09:26:37.692006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.858 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.858 107.71 IOPS, 323.14 MiB/s [2024-12-12T09:26:37.881Z] 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.858 "name": "raid_bdev1", 00:13:03.858 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:03.858 "strip_size_kb": 0, 00:13:03.858 "state": "online", 00:13:03.858 "raid_level": "raid1", 00:13:03.858 "superblock": true, 00:13:03.858 "num_base_bdevs": 2, 00:13:03.858 "num_base_bdevs_discovered": 2, 00:13:03.859 "num_base_bdevs_operational": 2, 
00:13:03.859 "base_bdevs_list": [ 00:13:03.859 { 00:13:03.859 "name": "spare", 00:13:03.859 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:03.859 "is_configured": true, 00:13:03.859 "data_offset": 2048, 00:13:03.859 "data_size": 63488 00:13:03.859 }, 00:13:03.859 { 00:13:03.859 "name": "BaseBdev2", 00:13:03.859 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:03.859 "is_configured": true, 00:13:03.859 "data_offset": 2048, 00:13:03.859 "data_size": 63488 00:13:03.859 } 00:13:03.859 ] 00:13:03.859 }' 00:13:03.859 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.119 09:26:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.119 "name": "raid_bdev1", 00:13:04.119 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:04.119 "strip_size_kb": 0, 00:13:04.119 "state": "online", 00:13:04.119 "raid_level": "raid1", 00:13:04.119 "superblock": true, 00:13:04.119 "num_base_bdevs": 2, 00:13:04.119 "num_base_bdevs_discovered": 2, 00:13:04.119 "num_base_bdevs_operational": 2, 00:13:04.119 "base_bdevs_list": [ 00:13:04.119 { 00:13:04.119 "name": "spare", 00:13:04.119 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:04.119 "is_configured": true, 00:13:04.119 "data_offset": 2048, 00:13:04.119 "data_size": 63488 00:13:04.119 }, 00:13:04.119 { 00:13:04.119 "name": "BaseBdev2", 00:13:04.119 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:04.119 "is_configured": true, 00:13:04.119 "data_offset": 2048, 00:13:04.119 "data_size": 63488 00:13:04.119 } 00:13:04.119 ] 00:13:04.119 }' 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.119 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.379 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.379 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.379 "name": "raid_bdev1", 00:13:04.379 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:04.379 "strip_size_kb": 0, 00:13:04.379 "state": "online", 00:13:04.379 "raid_level": "raid1", 00:13:04.379 "superblock": true, 00:13:04.379 "num_base_bdevs": 2, 00:13:04.379 "num_base_bdevs_discovered": 2, 00:13:04.379 "num_base_bdevs_operational": 2, 00:13:04.379 "base_bdevs_list": [ 00:13:04.379 { 00:13:04.379 "name": "spare", 00:13:04.379 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:04.379 "is_configured": true, 00:13:04.379 
"data_offset": 2048, 00:13:04.379 "data_size": 63488 00:13:04.379 }, 00:13:04.379 { 00:13:04.379 "name": "BaseBdev2", 00:13:04.379 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:04.379 "is_configured": true, 00:13:04.379 "data_offset": 2048, 00:13:04.379 "data_size": 63488 00:13:04.379 } 00:13:04.379 ] 00:13:04.379 }' 00:13:04.379 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.379 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.639 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.639 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.639 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.640 [2024-12-12 09:26:38.521351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.640 [2024-12-12 09:26:38.521448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.640 00:13:04.640 Latency(us) 00:13:04.640 [2024-12-12T09:26:38.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.640 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:04.640 raid_bdev1 : 7.75 99.48 298.43 0.00 0.00 13256.62 291.55 134620.67 00:13:04.640 [2024-12-12T09:26:38.663Z] =================================================================================================================== 00:13:04.640 [2024-12-12T09:26:38.663Z] Total : 99.48 298.43 0.00 0.00 13256.62 291.55 134620.67 00:13:04.640 [2024-12-12 09:26:38.615202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.640 [2024-12-12 09:26:38.615315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.640 [2024-12-12 09:26:38.615420] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.640 [2024-12-12 09:26:38.615466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:04.640 { 00:13:04.640 "results": [ 00:13:04.640 { 00:13:04.640 "job": "raid_bdev1", 00:13:04.640 "core_mask": "0x1", 00:13:04.640 "workload": "randrw", 00:13:04.640 "percentage": 50, 00:13:04.640 "status": "finished", 00:13:04.640 "queue_depth": 2, 00:13:04.640 "io_size": 3145728, 00:13:04.640 "runtime": 7.750594, 00:13:04.640 "iops": 99.47624659477712, 00:13:04.640 "mibps": 298.42873978433136, 00:13:04.640 "io_failed": 0, 00:13:04.640 "io_timeout": 0, 00:13:04.640 "avg_latency_us": 13256.623882101732, 00:13:04.640 "min_latency_us": 291.54934497816595, 00:13:04.640 "max_latency_us": 134620.67423580785 00:13:04.640 } 00:13:04.640 ], 00:13:04.640 "core_count": 1 00:13:04.640 } 00:13:04.640 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.640 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.640 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.640 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:04.640 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.640 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:04.900 /dev/nbd0 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.900 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.900 1+0 records in 00:13:04.900 1+0 records out 00:13:04.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287088 s, 14.3 MB/s 00:13:04.901 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.160 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:05.160 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.160 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.161 09:26:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:05.161 /dev/nbd1 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.161 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.421 1+0 
records in 00:13:05.421 1+0 records out 00:13:05.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056699 s, 7.2 MB/s 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.421 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.681 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.940 09:26:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.940 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.940 [2024-12-12 09:26:39.798556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:05.941 [2024-12-12 09:26:39.798620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.941 [2024-12-12 09:26:39.798650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:05.941 [2024-12-12 09:26:39.798660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.941 [2024-12-12 09:26:39.801263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.941 [2024-12-12 09:26:39.801363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:13:05.941 [2024-12-12 09:26:39.801482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:05.941 [2024-12-12 09:26:39.801546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.941 [2024-12-12 09:26:39.801711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.941 spare 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.941 [2024-12-12 09:26:39.901609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:05.941 [2024-12-12 09:26:39.901640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:05.941 [2024-12-12 09:26:39.901929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:05.941 [2024-12-12 09:26:39.902152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:05.941 [2024-12-12 09:26:39.902162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:05.941 [2024-12-12 09:26:39.902371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.941 09:26:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.941 "name": "raid_bdev1", 00:13:05.941 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:05.941 "strip_size_kb": 0, 00:13:05.941 "state": "online", 00:13:05.941 "raid_level": "raid1", 00:13:05.941 "superblock": true, 00:13:05.941 "num_base_bdevs": 2, 00:13:05.941 "num_base_bdevs_discovered": 2, 00:13:05.941 "num_base_bdevs_operational": 2, 00:13:05.941 "base_bdevs_list": [ 00:13:05.941 { 00:13:05.941 "name": "spare", 00:13:05.941 "uuid": 
"c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:05.941 "is_configured": true, 00:13:05.941 "data_offset": 2048, 00:13:05.941 "data_size": 63488 00:13:05.941 }, 00:13:05.941 { 00:13:05.941 "name": "BaseBdev2", 00:13:05.941 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:05.941 "is_configured": true, 00:13:05.941 "data_offset": 2048, 00:13:05.941 "data_size": 63488 00:13:05.941 } 00:13:05.941 ] 00:13:05.941 }' 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.941 09:26:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.510 "name": "raid_bdev1", 00:13:06.510 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:06.510 "strip_size_kb": 0, 00:13:06.510 
"state": "online", 00:13:06.510 "raid_level": "raid1", 00:13:06.510 "superblock": true, 00:13:06.510 "num_base_bdevs": 2, 00:13:06.510 "num_base_bdevs_discovered": 2, 00:13:06.510 "num_base_bdevs_operational": 2, 00:13:06.510 "base_bdevs_list": [ 00:13:06.510 { 00:13:06.510 "name": "spare", 00:13:06.510 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:06.510 "is_configured": true, 00:13:06.510 "data_offset": 2048, 00:13:06.510 "data_size": 63488 00:13:06.510 }, 00:13:06.510 { 00:13:06.510 "name": "BaseBdev2", 00:13:06.510 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:06.510 "is_configured": true, 00:13:06.510 "data_offset": 2048, 00:13:06.510 "data_size": 63488 00:13:06.510 } 00:13:06.510 ] 00:13:06.510 }' 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.510 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.511 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:06.511 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.511 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.511 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.511 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.770 
09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.770 [2024-12-12 09:26:40.541450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.770 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.771 "name": "raid_bdev1", 00:13:06.771 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:06.771 "strip_size_kb": 0, 00:13:06.771 "state": "online", 00:13:06.771 "raid_level": "raid1", 00:13:06.771 "superblock": true, 00:13:06.771 "num_base_bdevs": 2, 00:13:06.771 "num_base_bdevs_discovered": 1, 00:13:06.771 "num_base_bdevs_operational": 1, 00:13:06.771 "base_bdevs_list": [ 00:13:06.771 { 00:13:06.771 "name": null, 00:13:06.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.771 "is_configured": false, 00:13:06.771 "data_offset": 0, 00:13:06.771 "data_size": 63488 00:13:06.771 }, 00:13:06.771 { 00:13:06.771 "name": "BaseBdev2", 00:13:06.771 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:06.771 "is_configured": true, 00:13:06.771 "data_offset": 2048, 00:13:06.771 "data_size": 63488 00:13:06.771 } 00:13:06.771 ] 00:13:06.771 }' 00:13:06.771 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.771 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.031 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.031 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.031 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.031 [2024-12-12 09:26:40.936841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.031 [2024-12-12 09:26:40.937089] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:07.031 [2024-12-12 09:26:40.937155] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:07.031 [2024-12-12 09:26:40.937215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.031 [2024-12-12 09:26:40.955520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:07.031 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.031 09:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:07.031 [2024-12-12 09:26:40.957736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.970 09:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.229 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.229 "name": "raid_bdev1", 00:13:08.229 "uuid": 
"32fff911-ea9f-4710-97dc-863525e41f01", 00:13:08.229 "strip_size_kb": 0, 00:13:08.229 "state": "online", 00:13:08.229 "raid_level": "raid1", 00:13:08.229 "superblock": true, 00:13:08.229 "num_base_bdevs": 2, 00:13:08.229 "num_base_bdevs_discovered": 2, 00:13:08.229 "num_base_bdevs_operational": 2, 00:13:08.229 "process": { 00:13:08.229 "type": "rebuild", 00:13:08.229 "target": "spare", 00:13:08.229 "progress": { 00:13:08.229 "blocks": 20480, 00:13:08.229 "percent": 32 00:13:08.229 } 00:13:08.229 }, 00:13:08.229 "base_bdevs_list": [ 00:13:08.229 { 00:13:08.229 "name": "spare", 00:13:08.229 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:08.229 "is_configured": true, 00:13:08.229 "data_offset": 2048, 00:13:08.229 "data_size": 63488 00:13:08.229 }, 00:13:08.229 { 00:13:08.229 "name": "BaseBdev2", 00:13:08.229 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:08.229 "is_configured": true, 00:13:08.229 "data_offset": 2048, 00:13:08.229 "data_size": 63488 00:13:08.229 } 00:13:08.229 ] 00:13:08.229 }' 00:13:08.229 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.229 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.229 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.229 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.229 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.230 [2024-12-12 09:26:42.096894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.230 [2024-12-12 09:26:42.166356] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.230 [2024-12-12 09:26:42.166415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.230 [2024-12-12 09:26:42.166430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.230 [2024-12-12 09:26:42.166440] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.230 09:26:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.230 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.489 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.489 "name": "raid_bdev1", 00:13:08.489 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:08.489 "strip_size_kb": 0, 00:13:08.489 "state": "online", 00:13:08.489 "raid_level": "raid1", 00:13:08.489 "superblock": true, 00:13:08.489 "num_base_bdevs": 2, 00:13:08.489 "num_base_bdevs_discovered": 1, 00:13:08.489 "num_base_bdevs_operational": 1, 00:13:08.489 "base_bdevs_list": [ 00:13:08.489 { 00:13:08.489 "name": null, 00:13:08.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.489 "is_configured": false, 00:13:08.489 "data_offset": 0, 00:13:08.489 "data_size": 63488 00:13:08.489 }, 00:13:08.489 { 00:13:08.489 "name": "BaseBdev2", 00:13:08.489 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:08.489 "is_configured": true, 00:13:08.489 "data_offset": 2048, 00:13:08.489 "data_size": 63488 00:13:08.489 } 00:13:08.489 ] 00:13:08.489 }' 00:13:08.489 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.489 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.749 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:08.749 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.749 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.749 [2024-12-12 09:26:42.681096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:08.749 [2024-12-12 09:26:42.681233] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.749 [2024-12-12 09:26:42.681280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:08.749 [2024-12-12 09:26:42.681315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.749 [2024-12-12 09:26:42.681884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.749 [2024-12-12 09:26:42.681948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:08.749 [2024-12-12 09:26:42.682093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:08.749 [2024-12-12 09:26:42.682140] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:08.749 [2024-12-12 09:26:42.682178] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:08.749 [2024-12-12 09:26:42.682257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.749 [2024-12-12 09:26:42.699841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:08.749 spare 00:13:08.749 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.749 09:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:08.749 [2024-12-12 09:26:42.702047] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.688 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.688 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.688 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.688 09:26:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.688 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.948 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.948 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.948 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.948 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.948 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.948 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.948 "name": "raid_bdev1", 00:13:09.948 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:09.948 "strip_size_kb": 0, 00:13:09.948 "state": "online", 00:13:09.948 "raid_level": "raid1", 00:13:09.948 "superblock": true, 00:13:09.948 "num_base_bdevs": 2, 00:13:09.948 "num_base_bdevs_discovered": 2, 00:13:09.948 "num_base_bdevs_operational": 2, 00:13:09.948 "process": { 00:13:09.948 "type": "rebuild", 00:13:09.948 "target": "spare", 00:13:09.948 "progress": { 00:13:09.948 "blocks": 20480, 00:13:09.948 "percent": 32 00:13:09.948 } 00:13:09.948 }, 00:13:09.948 "base_bdevs_list": [ 00:13:09.948 { 00:13:09.948 "name": "spare", 00:13:09.948 "uuid": "c9517551-c72b-5d6c-9af1-fba13136665e", 00:13:09.948 "is_configured": true, 00:13:09.948 "data_offset": 2048, 00:13:09.948 "data_size": 63488 00:13:09.948 }, 00:13:09.948 { 00:13:09.948 "name": "BaseBdev2", 00:13:09.948 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:09.948 "is_configured": true, 00:13:09.948 "data_offset": 2048, 00:13:09.948 "data_size": 63488 00:13:09.948 } 00:13:09.948 ] 00:13:09.948 }' 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.949 [2024-12-12 09:26:43.861259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.949 [2024-12-12 09:26:43.910759] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:09.949 [2024-12-12 09:26:43.910882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.949 [2024-12-12 09:26:43.910904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.949 [2024-12-12 09:26:43.910911] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.949 09:26:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.949 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.209 09:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.209 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.209 "name": "raid_bdev1", 00:13:10.209 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:10.209 "strip_size_kb": 0, 00:13:10.209 "state": "online", 00:13:10.209 "raid_level": "raid1", 00:13:10.209 "superblock": true, 00:13:10.209 "num_base_bdevs": 2, 00:13:10.209 "num_base_bdevs_discovered": 1, 00:13:10.209 "num_base_bdevs_operational": 1, 00:13:10.209 "base_bdevs_list": [ 00:13:10.209 { 00:13:10.209 "name": null, 00:13:10.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.209 "is_configured": false, 00:13:10.209 "data_offset": 0, 00:13:10.209 "data_size": 63488 00:13:10.209 }, 00:13:10.209 { 00:13:10.209 "name": "BaseBdev2", 00:13:10.209 "uuid": 
"2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:10.209 "is_configured": true, 00:13:10.209 "data_offset": 2048, 00:13:10.209 "data_size": 63488 00:13:10.209 } 00:13:10.209 ] 00:13:10.209 }' 00:13:10.209 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.209 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.469 "name": "raid_bdev1", 00:13:10.469 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:10.469 "strip_size_kb": 0, 00:13:10.469 "state": "online", 00:13:10.469 "raid_level": "raid1", 00:13:10.469 "superblock": true, 00:13:10.469 "num_base_bdevs": 2, 00:13:10.469 "num_base_bdevs_discovered": 1, 00:13:10.469 "num_base_bdevs_operational": 1, 00:13:10.469 
"base_bdevs_list": [ 00:13:10.469 { 00:13:10.469 "name": null, 00:13:10.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.469 "is_configured": false, 00:13:10.469 "data_offset": 0, 00:13:10.469 "data_size": 63488 00:13:10.469 }, 00:13:10.469 { 00:13:10.469 "name": "BaseBdev2", 00:13:10.469 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:10.469 "is_configured": true, 00:13:10.469 "data_offset": 2048, 00:13:10.469 "data_size": 63488 00:13:10.469 } 00:13:10.469 ] 00:13:10.469 }' 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.469 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.729 [2024-12-12 09:26:44.525519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:10.729 [2024-12-12 09:26:44.525584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:10.729 [2024-12-12 09:26:44.525615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:10.729 [2024-12-12 09:26:44.525623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.729 [2024-12-12 09:26:44.526147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.729 [2024-12-12 09:26:44.526165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.729 [2024-12-12 09:26:44.526258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:10.729 [2024-12-12 09:26:44.526293] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:10.729 [2024-12-12 09:26:44.526304] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:10.729 [2024-12-12 09:26:44.526315] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:10.729 BaseBdev1 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.729 09:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.671 "name": "raid_bdev1", 00:13:11.671 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:11.671 "strip_size_kb": 0, 00:13:11.671 "state": "online", 00:13:11.671 "raid_level": "raid1", 00:13:11.671 "superblock": true, 00:13:11.671 "num_base_bdevs": 2, 00:13:11.671 "num_base_bdevs_discovered": 1, 00:13:11.671 "num_base_bdevs_operational": 1, 00:13:11.671 "base_bdevs_list": [ 00:13:11.671 { 00:13:11.671 "name": null, 00:13:11.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.671 "is_configured": false, 00:13:11.671 "data_offset": 0, 00:13:11.671 "data_size": 63488 00:13:11.671 }, 00:13:11.671 { 00:13:11.671 "name": "BaseBdev2", 00:13:11.671 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:11.671 "is_configured": true, 00:13:11.671 "data_offset": 2048, 00:13:11.671 "data_size": 63488 00:13:11.671 } 00:13:11.671 ] 00:13:11.671 }' 
00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.671 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.241 "name": "raid_bdev1", 00:13:12.241 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:12.241 "strip_size_kb": 0, 00:13:12.241 "state": "online", 00:13:12.241 "raid_level": "raid1", 00:13:12.241 "superblock": true, 00:13:12.241 "num_base_bdevs": 2, 00:13:12.241 "num_base_bdevs_discovered": 1, 00:13:12.241 "num_base_bdevs_operational": 1, 00:13:12.241 "base_bdevs_list": [ 00:13:12.241 { 00:13:12.241 "name": null, 00:13:12.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.241 "is_configured": false, 00:13:12.241 "data_offset": 0, 
00:13:12.241 "data_size": 63488 00:13:12.241 }, 00:13:12.241 { 00:13:12.241 "name": "BaseBdev2", 00:13:12.241 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:12.241 "is_configured": true, 00:13:12.241 "data_offset": 2048, 00:13:12.241 "data_size": 63488 00:13:12.241 } 00:13:12.241 ] 00:13:12.241 }' 00:13:12.241 09:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.241 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:12.242 [2024-12-12 09:26:46.107004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.242 [2024-12-12 09:26:46.107240] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:12.242 [2024-12-12 09:26:46.107302] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:12.242 request: 00:13:12.242 { 00:13:12.242 "base_bdev": "BaseBdev1", 00:13:12.242 "raid_bdev": "raid_bdev1", 00:13:12.242 "method": "bdev_raid_add_base_bdev", 00:13:12.242 "req_id": 1 00:13:12.242 } 00:13:12.242 Got JSON-RPC error response 00:13:12.242 response: 00:13:12.242 { 00:13:12.242 "code": -22, 00:13:12.242 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:12.242 } 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:12.242 09:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.180 09:26:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.180 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.180 "name": "raid_bdev1", 00:13:13.180 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:13.180 "strip_size_kb": 0, 00:13:13.180 "state": "online", 00:13:13.180 "raid_level": "raid1", 00:13:13.180 "superblock": true, 00:13:13.181 "num_base_bdevs": 2, 00:13:13.181 "num_base_bdevs_discovered": 1, 00:13:13.181 "num_base_bdevs_operational": 1, 00:13:13.181 "base_bdevs_list": [ 00:13:13.181 { 00:13:13.181 "name": null, 00:13:13.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.181 "is_configured": false, 00:13:13.181 "data_offset": 0, 00:13:13.181 "data_size": 63488 00:13:13.181 }, 00:13:13.181 { 00:13:13.181 "name": "BaseBdev2", 00:13:13.181 "uuid": 
"2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:13.181 "is_configured": true, 00:13:13.181 "data_offset": 2048, 00:13:13.181 "data_size": 63488 00:13:13.181 } 00:13:13.181 ] 00:13:13.181 }' 00:13:13.181 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.181 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.750 "name": "raid_bdev1", 00:13:13.750 "uuid": "32fff911-ea9f-4710-97dc-863525e41f01", 00:13:13.750 "strip_size_kb": 0, 00:13:13.750 "state": "online", 00:13:13.750 "raid_level": "raid1", 00:13:13.750 "superblock": true, 00:13:13.750 "num_base_bdevs": 2, 00:13:13.750 "num_base_bdevs_discovered": 1, 00:13:13.750 "num_base_bdevs_operational": 1, 00:13:13.750 
"base_bdevs_list": [ 00:13:13.750 { 00:13:13.750 "name": null, 00:13:13.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.750 "is_configured": false, 00:13:13.750 "data_offset": 0, 00:13:13.750 "data_size": 63488 00:13:13.750 }, 00:13:13.750 { 00:13:13.750 "name": "BaseBdev2", 00:13:13.750 "uuid": "2c32fbdf-c583-5c90-a75c-d2e00cb7d686", 00:13:13.750 "is_configured": true, 00:13:13.750 "data_offset": 2048, 00:13:13.750 "data_size": 63488 00:13:13.750 } 00:13:13.750 ] 00:13:13.750 }' 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77989 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77989 ']' 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77989 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77989 00:13:13.750 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.750 killing process with pid 77989 00:13:13.750 Received shutdown signal, test time was about 16.932564 seconds 00:13:13.750 00:13:13.750 Latency(us) 00:13:13.750 [2024-12-12T09:26:47.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:13:13.750 [2024-12-12T09:26:47.773Z] =================================================================================================================== 00:13:13.750 [2024-12-12T09:26:47.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:13.751 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.751 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77989' 00:13:13.751 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77989 00:13:13.751 [2024-12-12 09:26:47.757404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.751 [2024-12-12 09:26:47.757544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.751 09:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77989 00:13:13.751 [2024-12-12 09:26:47.757604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.751 [2024-12-12 09:26:47.757618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:14.010 [2024-12-12 09:26:47.996397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:15.390 00:13:15.390 real 0m20.210s 00:13:15.390 user 0m26.264s 00:13:15.390 sys 0m2.295s 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 ************************************ 00:13:15.390 END TEST raid_rebuild_test_sb_io 00:13:15.390 ************************************ 00:13:15.390 09:26:49 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:15.390 09:26:49 bdev_raid -- 
bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:15.390 09:26:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:15.390 09:26:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.390 09:26:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 ************************************ 00:13:15.390 START TEST raid_rebuild_test 00:13:15.390 ************************************ 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev3 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78683 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78683 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@835 -- # '[' -z 78683 ']' 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.390 09:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.391 09:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.391 09:26:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.650 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:15.650 Zero copy mechanism will not be used. 00:13:15.650 [2024-12-12 09:26:49.427011] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:13:15.650 [2024-12-12 09:26:49.427131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78683 ] 00:13:15.650 [2024-12-12 09:26:49.607208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.910 [2024-12-12 09:26:49.737780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.169 [2024-12-12 09:26:49.968943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.169 [2024-12-12 09:26:49.969014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.429 BaseBdev1_malloc 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.429 [2024-12-12 09:26:50.287741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:16.429 
[2024-12-12 09:26:50.287810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.429 [2024-12-12 09:26:50.287836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:16.429 [2024-12-12 09:26:50.287848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.429 [2024-12-12 09:26:50.290335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.429 [2024-12-12 09:26:50.290375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.429 BaseBdev1 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.429 BaseBdev2_malloc 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.429 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.429 [2024-12-12 09:26:50.349361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:16.430 [2024-12-12 09:26:50.349441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.430 [2024-12-12 09:26:50.349463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:16.430 [2024-12-12 09:26:50.349476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.430 [2024-12-12 09:26:50.351862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.430 [2024-12-12 09:26:50.351978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.430 BaseBdev2 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.430 BaseBdev3_malloc 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.430 [2024-12-12 09:26:50.441806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:16.430 [2024-12-12 09:26:50.441917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.430 [2024-12-12 09:26:50.441966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:16.430 [2024-12-12 09:26:50.442019] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.430 [2024-12-12 09:26:50.444365] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:13:16.430 [2024-12-12 09:26:50.444452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:16.430 BaseBdev3 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.430 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.690 BaseBdev4_malloc 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.690 [2024-12-12 09:26:50.502751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:16.690 [2024-12-12 09:26:50.502824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.690 [2024-12-12 09:26:50.502845] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:16.690 [2024-12-12 09:26:50.502857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.690 [2024-12-12 09:26:50.505235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.690 [2024-12-12 09:26:50.505313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:16.690 BaseBdev4 00:13:16.690 09:26:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.690 spare_malloc 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.690 spare_delay 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.690 [2024-12-12 09:26:50.574637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.690 [2024-12-12 09:26:50.574691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.690 [2024-12-12 09:26:50.574708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:16.690 [2024-12-12 09:26:50.574719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.690 [2024-12-12 09:26:50.577143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.690 [2024-12-12 09:26:50.577180] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.690 spare 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.690 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.690 [2024-12-12 09:26:50.586668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.690 [2024-12-12 09:26:50.588697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.690 [2024-12-12 09:26:50.588757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.690 [2024-12-12 09:26:50.588806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:16.690 [2024-12-12 09:26:50.588902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:16.690 [2024-12-12 09:26:50.588919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:16.690 [2024-12-12 09:26:50.589266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:16.690 [2024-12-12 09:26:50.589494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:16.691 [2024-12-12 09:26:50.589544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:16.691 [2024-12-12 09:26:50.589755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.691 09:26:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.691 "name": "raid_bdev1", 00:13:16.691 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:16.691 "strip_size_kb": 0, 00:13:16.691 "state": "online", 00:13:16.691 "raid_level": "raid1", 00:13:16.691 "superblock": false, 00:13:16.691 "num_base_bdevs": 4, 00:13:16.691 "num_base_bdevs_discovered": 4, 
00:13:16.691 "num_base_bdevs_operational": 4, 00:13:16.691 "base_bdevs_list": [ 00:13:16.691 { 00:13:16.691 "name": "BaseBdev1", 00:13:16.691 "uuid": "7a83fa89-c7fa-54d4-b7fd-56d0bda47881", 00:13:16.691 "is_configured": true, 00:13:16.691 "data_offset": 0, 00:13:16.691 "data_size": 65536 00:13:16.691 }, 00:13:16.691 { 00:13:16.691 "name": "BaseBdev2", 00:13:16.691 "uuid": "ca9066ea-1d1b-5085-9e60-c0e358bc6a7c", 00:13:16.691 "is_configured": true, 00:13:16.691 "data_offset": 0, 00:13:16.691 "data_size": 65536 00:13:16.691 }, 00:13:16.691 { 00:13:16.691 "name": "BaseBdev3", 00:13:16.691 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:16.691 "is_configured": true, 00:13:16.691 "data_offset": 0, 00:13:16.691 "data_size": 65536 00:13:16.691 }, 00:13:16.691 { 00:13:16.691 "name": "BaseBdev4", 00:13:16.691 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:16.691 "is_configured": true, 00:13:16.691 "data_offset": 0, 00:13:16.691 "data_size": 65536 00:13:16.691 } 00:13:16.691 ] 00:13:16.691 }' 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.691 09:26:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:17.261 [2024-12-12 09:26:51.054260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.261 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:17.521 
[2024-12-12 09:26:51.329458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:17.521 /dev/nbd0 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.521 1+0 records in 00:13:17.521 1+0 records out 00:13:17.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302413 s, 13.5 MB/s 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.521 09:26:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:17.521 09:26:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:24.114 65536+0 records in 00:13:24.114 65536+0 records out 00:13:24.114 33554432 bytes (34 MB, 32 MiB) copied, 5.77622 s, 5.8 MB/s 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:24.114 [2024-12-12 09:26:57.376688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.114 09:26:57 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 [2024-12-12 09:26:57.412724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.114 09:26:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.114 "name": "raid_bdev1", 00:13:24.114 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:24.114 "strip_size_kb": 0, 00:13:24.114 "state": "online", 00:13:24.114 "raid_level": "raid1", 00:13:24.114 "superblock": false, 00:13:24.114 "num_base_bdevs": 4, 00:13:24.114 "num_base_bdevs_discovered": 3, 00:13:24.114 "num_base_bdevs_operational": 3, 00:13:24.114 "base_bdevs_list": [ 00:13:24.114 { 00:13:24.114 "name": null, 00:13:24.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.114 "is_configured": false, 00:13:24.114 "data_offset": 0, 00:13:24.114 "data_size": 65536 00:13:24.114 }, 00:13:24.114 { 00:13:24.114 "name": "BaseBdev2", 00:13:24.114 "uuid": "ca9066ea-1d1b-5085-9e60-c0e358bc6a7c", 00:13:24.114 "is_configured": true, 00:13:24.114 "data_offset": 0, 00:13:24.114 "data_size": 65536 00:13:24.114 }, 00:13:24.114 { 00:13:24.114 "name": "BaseBdev3", 00:13:24.114 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:24.114 "is_configured": true, 00:13:24.114 "data_offset": 0, 00:13:24.114 "data_size": 65536 00:13:24.114 }, 00:13:24.114 { 00:13:24.114 "name": "BaseBdev4", 00:13:24.114 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:24.114 "is_configured": true, 00:13:24.114 "data_offset": 0, 00:13:24.114 "data_size": 65536 00:13:24.114 } 00:13:24.114 ] 
00:13:24.114 }' 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 [2024-12-12 09:26:57.796051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.114 [2024-12-12 09:26:57.811177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.114 09:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:24.114 [2024-12-12 09:26:57.813362] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.053 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.053 "name": "raid_bdev1", 00:13:25.053 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:25.053 "strip_size_kb": 0, 00:13:25.053 "state": "online", 00:13:25.053 "raid_level": "raid1", 00:13:25.053 "superblock": false, 00:13:25.053 "num_base_bdevs": 4, 00:13:25.053 "num_base_bdevs_discovered": 4, 00:13:25.053 "num_base_bdevs_operational": 4, 00:13:25.053 "process": { 00:13:25.053 "type": "rebuild", 00:13:25.053 "target": "spare", 00:13:25.053 "progress": { 00:13:25.053 "blocks": 20480, 00:13:25.053 "percent": 31 00:13:25.054 } 00:13:25.054 }, 00:13:25.054 "base_bdevs_list": [ 00:13:25.054 { 00:13:25.054 "name": "spare", 00:13:25.054 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:25.054 "is_configured": true, 00:13:25.054 "data_offset": 0, 00:13:25.054 "data_size": 65536 00:13:25.054 }, 00:13:25.054 { 00:13:25.054 "name": "BaseBdev2", 00:13:25.054 "uuid": "ca9066ea-1d1b-5085-9e60-c0e358bc6a7c", 00:13:25.054 "is_configured": true, 00:13:25.054 "data_offset": 0, 00:13:25.054 "data_size": 65536 00:13:25.054 }, 00:13:25.054 { 00:13:25.054 "name": "BaseBdev3", 00:13:25.054 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:25.054 "is_configured": true, 00:13:25.054 "data_offset": 0, 00:13:25.054 "data_size": 65536 00:13:25.054 }, 00:13:25.054 { 00:13:25.054 "name": "BaseBdev4", 00:13:25.054 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:25.054 "is_configured": true, 00:13:25.054 "data_offset": 0, 00:13:25.054 "data_size": 65536 00:13:25.054 } 00:13:25.054 ] 00:13:25.054 }' 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.054 09:26:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.054 [2024-12-12 09:26:58.972528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.054 [2024-12-12 09:26:59.022120] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:25.054 [2024-12-12 09:26:59.022252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.054 [2024-12-12 09:26:59.022290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.054 [2024-12-12 09:26:59.022315] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.054 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.313 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.313 "name": "raid_bdev1", 00:13:25.313 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:25.313 "strip_size_kb": 0, 00:13:25.313 "state": "online", 00:13:25.313 "raid_level": "raid1", 00:13:25.313 "superblock": false, 00:13:25.313 "num_base_bdevs": 4, 00:13:25.313 "num_base_bdevs_discovered": 3, 00:13:25.313 "num_base_bdevs_operational": 3, 00:13:25.313 "base_bdevs_list": [ 00:13:25.313 { 00:13:25.313 "name": null, 00:13:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.313 "is_configured": false, 00:13:25.313 "data_offset": 0, 00:13:25.313 "data_size": 65536 00:13:25.313 }, 00:13:25.313 { 00:13:25.313 "name": "BaseBdev2", 00:13:25.313 "uuid": "ca9066ea-1d1b-5085-9e60-c0e358bc6a7c", 00:13:25.313 "is_configured": true, 00:13:25.313 "data_offset": 0, 00:13:25.313 "data_size": 65536 00:13:25.313 }, 00:13:25.313 { 00:13:25.313 "name": "BaseBdev3", 00:13:25.313 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:25.313 
"is_configured": true, 00:13:25.313 "data_offset": 0, 00:13:25.313 "data_size": 65536 00:13:25.313 }, 00:13:25.313 { 00:13:25.313 "name": "BaseBdev4", 00:13:25.313 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:25.313 "is_configured": true, 00:13:25.313 "data_offset": 0, 00:13:25.313 "data_size": 65536 00:13:25.313 } 00:13:25.313 ] 00:13:25.313 }' 00:13:25.313 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.313 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.574 "name": "raid_bdev1", 00:13:25.574 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:25.574 "strip_size_kb": 0, 00:13:25.574 "state": "online", 00:13:25.574 "raid_level": "raid1", 00:13:25.574 "superblock": false, 00:13:25.574 "num_base_bdevs": 4, 00:13:25.574 
"num_base_bdevs_discovered": 3, 00:13:25.574 "num_base_bdevs_operational": 3, 00:13:25.574 "base_bdevs_list": [ 00:13:25.574 { 00:13:25.574 "name": null, 00:13:25.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.574 "is_configured": false, 00:13:25.574 "data_offset": 0, 00:13:25.574 "data_size": 65536 00:13:25.574 }, 00:13:25.574 { 00:13:25.574 "name": "BaseBdev2", 00:13:25.574 "uuid": "ca9066ea-1d1b-5085-9e60-c0e358bc6a7c", 00:13:25.574 "is_configured": true, 00:13:25.574 "data_offset": 0, 00:13:25.574 "data_size": 65536 00:13:25.574 }, 00:13:25.574 { 00:13:25.574 "name": "BaseBdev3", 00:13:25.574 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:25.574 "is_configured": true, 00:13:25.574 "data_offset": 0, 00:13:25.574 "data_size": 65536 00:13:25.574 }, 00:13:25.574 { 00:13:25.574 "name": "BaseBdev4", 00:13:25.574 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:25.574 "is_configured": true, 00:13:25.574 "data_offset": 0, 00:13:25.574 "data_size": 65536 00:13:25.574 } 00:13:25.574 ] 00:13:25.574 }' 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.574 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.834 [2024-12-12 09:26:59.651791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.834 [2024-12-12 09:26:59.666487] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.834 09:26:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:25.834 [2024-12-12 09:26:59.668646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.774 "name": "raid_bdev1", 00:13:26.774 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:26.774 "strip_size_kb": 0, 00:13:26.774 "state": "online", 00:13:26.774 "raid_level": "raid1", 00:13:26.774 "superblock": false, 00:13:26.774 "num_base_bdevs": 4, 00:13:26.774 "num_base_bdevs_discovered": 4, 00:13:26.774 "num_base_bdevs_operational": 4, 00:13:26.774 "process": { 00:13:26.774 "type": "rebuild", 00:13:26.774 "target": 
"spare", 00:13:26.774 "progress": { 00:13:26.774 "blocks": 20480, 00:13:26.774 "percent": 31 00:13:26.774 } 00:13:26.774 }, 00:13:26.774 "base_bdevs_list": [ 00:13:26.774 { 00:13:26.774 "name": "spare", 00:13:26.774 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:26.774 "is_configured": true, 00:13:26.774 "data_offset": 0, 00:13:26.774 "data_size": 65536 00:13:26.774 }, 00:13:26.774 { 00:13:26.774 "name": "BaseBdev2", 00:13:26.774 "uuid": "ca9066ea-1d1b-5085-9e60-c0e358bc6a7c", 00:13:26.774 "is_configured": true, 00:13:26.774 "data_offset": 0, 00:13:26.774 "data_size": 65536 00:13:26.774 }, 00:13:26.774 { 00:13:26.774 "name": "BaseBdev3", 00:13:26.774 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:26.774 "is_configured": true, 00:13:26.774 "data_offset": 0, 00:13:26.774 "data_size": 65536 00:13:26.774 }, 00:13:26.774 { 00:13:26.774 "name": "BaseBdev4", 00:13:26.774 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:26.774 "is_configured": true, 00:13:26.774 "data_offset": 0, 00:13:26.774 "data_size": 65536 00:13:26.774 } 00:13:26.774 ] 00:13:26.774 }' 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.774 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.034 [2024-12-12 09:27:00.832687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.034 [2024-12-12 09:27:00.877456] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:27.034 "name": "raid_bdev1", 00:13:27.034 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:27.034 "strip_size_kb": 0, 00:13:27.034 "state": "online", 00:13:27.034 "raid_level": "raid1", 00:13:27.034 "superblock": false, 00:13:27.034 "num_base_bdevs": 4, 00:13:27.034 "num_base_bdevs_discovered": 3, 00:13:27.034 "num_base_bdevs_operational": 3, 00:13:27.034 "process": { 00:13:27.034 "type": "rebuild", 00:13:27.034 "target": "spare", 00:13:27.034 "progress": { 00:13:27.034 "blocks": 24576, 00:13:27.034 "percent": 37 00:13:27.034 } 00:13:27.034 }, 00:13:27.034 "base_bdevs_list": [ 00:13:27.034 { 00:13:27.034 "name": "spare", 00:13:27.034 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:27.034 "is_configured": true, 00:13:27.034 "data_offset": 0, 00:13:27.034 "data_size": 65536 00:13:27.034 }, 00:13:27.034 { 00:13:27.034 "name": null, 00:13:27.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.034 "is_configured": false, 00:13:27.034 "data_offset": 0, 00:13:27.034 "data_size": 65536 00:13:27.034 }, 00:13:27.034 { 00:13:27.034 "name": "BaseBdev3", 00:13:27.034 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:27.034 "is_configured": true, 00:13:27.034 "data_offset": 0, 00:13:27.034 "data_size": 65536 00:13:27.034 }, 00:13:27.034 { 00:13:27.034 "name": "BaseBdev4", 00:13:27.034 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:27.034 "is_configured": true, 00:13:27.034 "data_offset": 0, 00:13:27.034 "data_size": 65536 00:13:27.034 } 00:13:27.034 ] 00:13:27.034 }' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.034 09:27:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.035 09:27:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=447 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.035 09:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.294 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.294 "name": "raid_bdev1", 00:13:27.294 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:27.294 "strip_size_kb": 0, 00:13:27.294 "state": "online", 00:13:27.294 "raid_level": "raid1", 00:13:27.294 "superblock": false, 00:13:27.294 "num_base_bdevs": 4, 00:13:27.294 "num_base_bdevs_discovered": 3, 00:13:27.294 "num_base_bdevs_operational": 3, 00:13:27.294 "process": { 00:13:27.294 "type": "rebuild", 00:13:27.294 "target": "spare", 00:13:27.294 "progress": { 00:13:27.294 "blocks": 26624, 00:13:27.294 "percent": 40 00:13:27.294 } 00:13:27.294 }, 00:13:27.294 "base_bdevs_list": [ 00:13:27.294 { 00:13:27.294 "name": 
"spare", 00:13:27.294 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:27.294 "is_configured": true, 00:13:27.294 "data_offset": 0, 00:13:27.294 "data_size": 65536 00:13:27.294 }, 00:13:27.294 { 00:13:27.294 "name": null, 00:13:27.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.294 "is_configured": false, 00:13:27.294 "data_offset": 0, 00:13:27.294 "data_size": 65536 00:13:27.294 }, 00:13:27.294 { 00:13:27.294 "name": "BaseBdev3", 00:13:27.294 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:27.294 "is_configured": true, 00:13:27.294 "data_offset": 0, 00:13:27.294 "data_size": 65536 00:13:27.294 }, 00:13:27.294 { 00:13:27.294 "name": "BaseBdev4", 00:13:27.294 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:27.294 "is_configured": true, 00:13:27.294 "data_offset": 0, 00:13:27.294 "data_size": 65536 00:13:27.294 } 00:13:27.294 ] 00:13:27.294 }' 00:13:27.294 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.294 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.294 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.294 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.294 09:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.234 09:27:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.234 "name": "raid_bdev1", 00:13:28.234 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:28.234 "strip_size_kb": 0, 00:13:28.234 "state": "online", 00:13:28.234 "raid_level": "raid1", 00:13:28.234 "superblock": false, 00:13:28.234 "num_base_bdevs": 4, 00:13:28.234 "num_base_bdevs_discovered": 3, 00:13:28.234 "num_base_bdevs_operational": 3, 00:13:28.234 "process": { 00:13:28.234 "type": "rebuild", 00:13:28.234 "target": "spare", 00:13:28.234 "progress": { 00:13:28.234 "blocks": 49152, 00:13:28.234 "percent": 75 00:13:28.234 } 00:13:28.234 }, 00:13:28.234 "base_bdevs_list": [ 00:13:28.234 { 00:13:28.234 "name": "spare", 00:13:28.234 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 0, 00:13:28.234 "data_size": 65536 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 "name": null, 00:13:28.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.234 "is_configured": false, 00:13:28.234 "data_offset": 0, 00:13:28.234 "data_size": 65536 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 "name": "BaseBdev3", 00:13:28.234 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 0, 00:13:28.234 "data_size": 65536 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 
"name": "BaseBdev4", 00:13:28.234 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 0, 00:13:28.234 "data_size": 65536 00:13:28.234 } 00:13:28.234 ] 00:13:28.234 }' 00:13:28.234 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.494 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.494 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.494 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.494 09:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.063 [2024-12-12 09:27:02.892081] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:29.063 [2024-12-12 09:27:02.892204] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:29.063 [2024-12-12 09:27:02.892284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.323 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.583 "name": "raid_bdev1", 00:13:29.583 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:29.583 "strip_size_kb": 0, 00:13:29.583 "state": "online", 00:13:29.583 "raid_level": "raid1", 00:13:29.583 "superblock": false, 00:13:29.583 "num_base_bdevs": 4, 00:13:29.583 "num_base_bdevs_discovered": 3, 00:13:29.583 "num_base_bdevs_operational": 3, 00:13:29.583 "base_bdevs_list": [ 00:13:29.583 { 00:13:29.583 "name": "spare", 00:13:29.583 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:29.583 "is_configured": true, 00:13:29.583 "data_offset": 0, 00:13:29.583 "data_size": 65536 00:13:29.583 }, 00:13:29.583 { 00:13:29.583 "name": null, 00:13:29.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.583 "is_configured": false, 00:13:29.583 "data_offset": 0, 00:13:29.583 "data_size": 65536 00:13:29.583 }, 00:13:29.583 { 00:13:29.583 "name": "BaseBdev3", 00:13:29.583 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:29.583 "is_configured": true, 00:13:29.583 "data_offset": 0, 00:13:29.583 "data_size": 65536 00:13:29.583 }, 00:13:29.583 { 00:13:29.583 "name": "BaseBdev4", 00:13:29.583 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:29.583 "is_configured": true, 00:13:29.583 "data_offset": 0, 00:13:29.583 "data_size": 65536 00:13:29.583 } 00:13:29.583 ] 00:13:29.583 }' 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:29.583 09:27:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.583 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.583 "name": "raid_bdev1", 00:13:29.583 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:29.583 "strip_size_kb": 0, 00:13:29.583 "state": "online", 00:13:29.583 "raid_level": "raid1", 00:13:29.583 "superblock": false, 00:13:29.583 "num_base_bdevs": 4, 00:13:29.583 "num_base_bdevs_discovered": 3, 00:13:29.583 "num_base_bdevs_operational": 3, 00:13:29.583 "base_bdevs_list": [ 00:13:29.583 { 00:13:29.583 "name": "spare", 00:13:29.583 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:29.583 "is_configured": true, 
00:13:29.583 "data_offset": 0, 00:13:29.583 "data_size": 65536 00:13:29.583 }, 00:13:29.583 { 00:13:29.583 "name": null, 00:13:29.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.583 "is_configured": false, 00:13:29.583 "data_offset": 0, 00:13:29.583 "data_size": 65536 00:13:29.583 }, 00:13:29.583 { 00:13:29.583 "name": "BaseBdev3", 00:13:29.583 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:29.583 "is_configured": true, 00:13:29.583 "data_offset": 0, 00:13:29.584 "data_size": 65536 00:13:29.584 }, 00:13:29.584 { 00:13:29.584 "name": "BaseBdev4", 00:13:29.584 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:29.584 "is_configured": true, 00:13:29.584 "data_offset": 0, 00:13:29.584 "data_size": 65536 00:13:29.584 } 00:13:29.584 ] 00:13:29.584 }' 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.584 
09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.584 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.844 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.844 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.844 "name": "raid_bdev1", 00:13:29.844 "uuid": "ace03650-e248-44a5-a203-b54a074468ac", 00:13:29.844 "strip_size_kb": 0, 00:13:29.844 "state": "online", 00:13:29.844 "raid_level": "raid1", 00:13:29.844 "superblock": false, 00:13:29.844 "num_base_bdevs": 4, 00:13:29.844 "num_base_bdevs_discovered": 3, 00:13:29.844 "num_base_bdevs_operational": 3, 00:13:29.844 "base_bdevs_list": [ 00:13:29.844 { 00:13:29.844 "name": "spare", 00:13:29.844 "uuid": "eb1912b3-bb64-5df4-9d82-a42eb2e09a03", 00:13:29.844 "is_configured": true, 00:13:29.844 "data_offset": 0, 00:13:29.844 "data_size": 65536 00:13:29.844 }, 00:13:29.844 { 00:13:29.844 "name": null, 00:13:29.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.844 "is_configured": false, 00:13:29.844 "data_offset": 0, 00:13:29.844 "data_size": 65536 00:13:29.844 }, 00:13:29.844 { 00:13:29.844 "name": "BaseBdev3", 00:13:29.844 "uuid": "ac018d88-29e0-577c-88e6-b08b502b5163", 00:13:29.844 "is_configured": true, 00:13:29.844 "data_offset": 0, 00:13:29.844 "data_size": 65536 00:13:29.844 }, 00:13:29.844 { 
00:13:29.844 "name": "BaseBdev4", 00:13:29.844 "uuid": "22c9f9e1-77fe-53cc-90f5-382d7948a39b", 00:13:29.844 "is_configured": true, 00:13:29.844 "data_offset": 0, 00:13:29.844 "data_size": 65536 00:13:29.844 } 00:13:29.844 ] 00:13:29.844 }' 00:13:29.844 09:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.844 09:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.104 [2024-12-12 09:27:04.076023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.104 [2024-12-12 09:27:04.076059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.104 [2024-12-12 09:27:04.076158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.104 [2024-12-12 09:27:04.076251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.104 [2024-12-12 09:27:04.076262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.104 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:30.364 /dev/nbd0 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:30.364 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.365 09:27:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.365 1+0 records in 00:13:30.365 1+0 records out 00:13:30.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303797 s, 13.5 MB/s 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:30.365 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:30.625 /dev/nbd1 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.625 
09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.625 1+0 records in 00:13:30.625 1+0 records out 00:13:30.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498307 s, 8.2 MB/s 00:13:30.625 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 
/dev/nbd1 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.885 09:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.145 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:31.404 
09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78683 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78683 ']' 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78683 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78683 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78683' 00:13:31.404 killing process with pid 78683 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78683 00:13:31.404 Received shutdown signal, test time was about 60.000000 seconds 00:13:31.404 00:13:31.404 Latency(us) 
00:13:31.404 [2024-12-12T09:27:05.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.404 [2024-12-12T09:27:05.427Z] =================================================================================================================== 00:13:31.404 [2024-12-12T09:27:05.427Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.404 [2024-12-12 09:27:05.288046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.404 09:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78683 00:13:31.972 [2024-12-12 09:27:05.801027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.352 09:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.352 00:13:33.352 real 0m17.683s 00:13:33.352 user 0m19.369s 00:13:33.352 sys 0m3.401s 00:13:33.352 09:27:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.352 09:27:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 ************************************ 00:13:33.352 END TEST raid_rebuild_test 00:13:33.352 ************************************ 00:13:33.352 09:27:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:33.352 09:27:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:33.352 09:27:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.352 09:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 ************************************ 00:13:33.352 START TEST raid_rebuild_test_sb 00:13:33.352 ************************************ 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.352 09:27:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79129 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79129 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:33.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79129 ']' 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.352 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.352 Zero copy mechanism will not be used. 00:13:33.352 [2024-12-12 09:27:07.188495] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:13:33.352 [2024-12-12 09:27:07.188643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79129 ] 00:13:33.352 [2024-12-12 09:27:07.368032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.612 [2024-12-12 09:27:07.503724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.872 [2024-12-12 09:27:07.721206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.872 [2024-12-12 09:27:07.721266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.132 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.132 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:34.132 09:27:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.132 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:34.132 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.132 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.132 BaseBdev1_malloc 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.132 [2024-12-12 09:27:08.056913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.132 [2024-12-12 09:27:08.057001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.132 [2024-12-12 09:27:08.057030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.132 [2024-12-12 09:27:08.057043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.132 [2024-12-12 09:27:08.059480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.132 [2024-12-12 09:27:08.059521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.132 BaseBdev1 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.132 BaseBdev2_malloc 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.132 [2024-12-12 09:27:08.118435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:34.132 [2024-12-12 09:27:08.118587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.132 [2024-12-12 09:27:08.118630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.132 [2024-12-12 09:27:08.118663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.132 [2024-12-12 09:27:08.121266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.132 [2024-12-12 09:27:08.121341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.132 BaseBdev2 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.132 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:34.392 BaseBdev3_malloc 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.392 [2024-12-12 09:27:08.193410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:34.392 [2024-12-12 09:27:08.193533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.392 [2024-12-12 09:27:08.193579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:34.392 [2024-12-12 09:27:08.193631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.392 [2024-12-12 09:27:08.196231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.392 [2024-12-12 09:27:08.196323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:34.392 BaseBdev3 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.392 BaseBdev4_malloc 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.392 [2024-12-12 09:27:08.256337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:34.392 [2024-12-12 09:27:08.256475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.392 [2024-12-12 09:27:08.256519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:34.392 [2024-12-12 09:27:08.256572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.392 [2024-12-12 09:27:08.259106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.392 [2024-12-12 09:27:08.259181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:34.392 BaseBdev4 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.392 spare_malloc 00:13:34.392 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:34.393 spare_delay 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.393 [2024-12-12 09:27:08.326455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.393 [2024-12-12 09:27:08.326565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.393 [2024-12-12 09:27:08.326599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:34.393 [2024-12-12 09:27:08.326629] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.393 [2024-12-12 09:27:08.329076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.393 [2024-12-12 09:27:08.329171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.393 spare 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.393 [2024-12-12 09:27:08.338486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.393 [2024-12-12 09:27:08.340612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.393 [2024-12-12 
09:27:08.340747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.393 [2024-12-12 09:27:08.340826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:34.393 [2024-12-12 09:27:08.341086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.393 [2024-12-12 09:27:08.341140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.393 [2024-12-12 09:27:08.341416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:34.393 [2024-12-12 09:27:08.341634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.393 [2024-12-12 09:27:08.341678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.393 [2024-12-12 09:27:08.341860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.393 "name": "raid_bdev1", 00:13:34.393 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:34.393 "strip_size_kb": 0, 00:13:34.393 "state": "online", 00:13:34.393 "raid_level": "raid1", 00:13:34.393 "superblock": true, 00:13:34.393 "num_base_bdevs": 4, 00:13:34.393 "num_base_bdevs_discovered": 4, 00:13:34.393 "num_base_bdevs_operational": 4, 00:13:34.393 "base_bdevs_list": [ 00:13:34.393 { 00:13:34.393 "name": "BaseBdev1", 00:13:34.393 "uuid": "13fcb7d4-9831-513b-af9d-23e394187698", 00:13:34.393 "is_configured": true, 00:13:34.393 "data_offset": 2048, 00:13:34.393 "data_size": 63488 00:13:34.393 }, 00:13:34.393 { 00:13:34.393 "name": "BaseBdev2", 00:13:34.393 "uuid": "a39c0457-0046-528a-8172-9386941c27bb", 00:13:34.393 "is_configured": true, 00:13:34.393 "data_offset": 2048, 00:13:34.393 "data_size": 63488 00:13:34.393 }, 00:13:34.393 { 00:13:34.393 "name": "BaseBdev3", 00:13:34.393 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:34.393 "is_configured": true, 00:13:34.393 "data_offset": 2048, 00:13:34.393 "data_size": 63488 00:13:34.393 }, 00:13:34.393 { 00:13:34.393 "name": "BaseBdev4", 00:13:34.393 
"uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:34.393 "is_configured": true, 00:13:34.393 "data_offset": 2048, 00:13:34.393 "data_size": 63488 00:13:34.393 } 00:13:34.393 ] 00:13:34.393 }' 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.393 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.962 [2024-12-12 09:27:08.742180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true 
']' 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.962 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:35.222 [2024-12-12 09:27:09.001468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:35.222 /dev/nbd0 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( 
i <= 20 )) 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.222 1+0 records in 00:13:35.222 1+0 records out 00:13:35.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288239 s, 14.2 MB/s 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:35.222 09:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:40.501 63488+0 records in 00:13:40.501 63488+0 records out 00:13:40.501 32505856 bytes 
(33 MB, 31 MiB) copied, 5.28373 s, 6.2 MB/s 00:13:40.501 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:40.502 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.502 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:40.502 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.502 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:40.502 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.502 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.762 [2024-12-12 09:27:14.542200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.762 [2024-12-12 09:27:14.574577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.762 09:27:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.762 "name": "raid_bdev1", 00:13:40.762 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:40.762 "strip_size_kb": 0, 00:13:40.762 "state": "online", 00:13:40.762 "raid_level": "raid1", 00:13:40.762 "superblock": true, 00:13:40.762 "num_base_bdevs": 4, 00:13:40.762 "num_base_bdevs_discovered": 3, 00:13:40.762 "num_base_bdevs_operational": 3, 00:13:40.762 "base_bdevs_list": [ 00:13:40.762 { 00:13:40.762 "name": null, 00:13:40.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.762 "is_configured": false, 00:13:40.762 "data_offset": 0, 00:13:40.762 "data_size": 63488 00:13:40.762 }, 00:13:40.762 { 00:13:40.762 "name": "BaseBdev2", 00:13:40.762 "uuid": "a39c0457-0046-528a-8172-9386941c27bb", 00:13:40.762 "is_configured": true, 00:13:40.762 "data_offset": 2048, 00:13:40.762 "data_size": 63488 00:13:40.762 }, 00:13:40.762 { 00:13:40.762 "name": "BaseBdev3", 00:13:40.762 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:40.762 "is_configured": true, 00:13:40.762 "data_offset": 2048, 00:13:40.762 "data_size": 63488 00:13:40.762 }, 00:13:40.762 { 00:13:40.762 "name": "BaseBdev4", 00:13:40.762 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:40.762 "is_configured": true, 00:13:40.762 "data_offset": 2048, 00:13:40.762 "data_size": 63488 00:13:40.762 } 00:13:40.762 ] 00:13:40.762 }' 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.762 09:27:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.345 09:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:41.345 09:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.345 09:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.345 [2024-12-12 09:27:15.053772] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.345 [2024-12-12 09:27:15.069101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:41.345 09:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.345 09:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:41.345 [2024-12-12 09:27:15.071236] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.284 "name": "raid_bdev1", 00:13:42.284 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:42.284 "strip_size_kb": 0, 00:13:42.284 "state": "online", 00:13:42.284 "raid_level": "raid1", 00:13:42.284 "superblock": true, 00:13:42.284 
"num_base_bdevs": 4, 00:13:42.284 "num_base_bdevs_discovered": 4, 00:13:42.284 "num_base_bdevs_operational": 4, 00:13:42.284 "process": { 00:13:42.284 "type": "rebuild", 00:13:42.284 "target": "spare", 00:13:42.284 "progress": { 00:13:42.284 "blocks": 20480, 00:13:42.284 "percent": 32 00:13:42.284 } 00:13:42.284 }, 00:13:42.284 "base_bdevs_list": [ 00:13:42.284 { 00:13:42.284 "name": "spare", 00:13:42.284 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:42.284 "is_configured": true, 00:13:42.284 "data_offset": 2048, 00:13:42.284 "data_size": 63488 00:13:42.284 }, 00:13:42.284 { 00:13:42.284 "name": "BaseBdev2", 00:13:42.284 "uuid": "a39c0457-0046-528a-8172-9386941c27bb", 00:13:42.284 "is_configured": true, 00:13:42.284 "data_offset": 2048, 00:13:42.284 "data_size": 63488 00:13:42.284 }, 00:13:42.284 { 00:13:42.284 "name": "BaseBdev3", 00:13:42.284 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:42.284 "is_configured": true, 00:13:42.284 "data_offset": 2048, 00:13:42.284 "data_size": 63488 00:13:42.284 }, 00:13:42.284 { 00:13:42.284 "name": "BaseBdev4", 00:13:42.284 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:42.284 "is_configured": true, 00:13:42.284 "data_offset": 2048, 00:13:42.284 "data_size": 63488 00:13:42.284 } 00:13:42.284 ] 00:13:42.284 }' 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:42.284 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.284 09:27:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.284 [2024-12-12 09:27:16.218403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.284 [2024-12-12 09:27:16.279987] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:42.284 [2024-12-12 09:27:16.280055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.284 [2024-12-12 09:27:16.280074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.284 [2024-12-12 09:27:16.280085] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.545 "name": "raid_bdev1", 00:13:42.545 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:42.545 "strip_size_kb": 0, 00:13:42.545 "state": "online", 00:13:42.545 "raid_level": "raid1", 00:13:42.545 "superblock": true, 00:13:42.545 "num_base_bdevs": 4, 00:13:42.545 "num_base_bdevs_discovered": 3, 00:13:42.545 "num_base_bdevs_operational": 3, 00:13:42.545 "base_bdevs_list": [ 00:13:42.545 { 00:13:42.545 "name": null, 00:13:42.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.545 "is_configured": false, 00:13:42.545 "data_offset": 0, 00:13:42.545 "data_size": 63488 00:13:42.545 }, 00:13:42.545 { 00:13:42.545 "name": "BaseBdev2", 00:13:42.545 "uuid": "a39c0457-0046-528a-8172-9386941c27bb", 00:13:42.545 "is_configured": true, 00:13:42.545 "data_offset": 2048, 00:13:42.545 "data_size": 63488 00:13:42.545 }, 00:13:42.545 { 00:13:42.545 "name": "BaseBdev3", 00:13:42.545 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:42.545 "is_configured": true, 00:13:42.545 "data_offset": 2048, 00:13:42.545 "data_size": 63488 00:13:42.545 }, 00:13:42.545 { 00:13:42.545 "name": "BaseBdev4", 00:13:42.545 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:42.545 "is_configured": true, 00:13:42.545 "data_offset": 2048, 00:13:42.545 "data_size": 63488 00:13:42.545 } 00:13:42.545 ] 00:13:42.545 }' 00:13:42.545 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.545 
09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.805 "name": "raid_bdev1", 00:13:42.805 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:42.805 "strip_size_kb": 0, 00:13:42.805 "state": "online", 00:13:42.805 "raid_level": "raid1", 00:13:42.805 "superblock": true, 00:13:42.805 "num_base_bdevs": 4, 00:13:42.805 "num_base_bdevs_discovered": 3, 00:13:42.805 "num_base_bdevs_operational": 3, 00:13:42.805 "base_bdevs_list": [ 00:13:42.805 { 00:13:42.805 "name": null, 00:13:42.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.805 "is_configured": false, 00:13:42.805 "data_offset": 0, 00:13:42.805 "data_size": 63488 00:13:42.805 }, 00:13:42.805 { 00:13:42.805 "name": "BaseBdev2", 00:13:42.805 "uuid": 
"a39c0457-0046-528a-8172-9386941c27bb", 00:13:42.805 "is_configured": true, 00:13:42.805 "data_offset": 2048, 00:13:42.805 "data_size": 63488 00:13:42.805 }, 00:13:42.805 { 00:13:42.805 "name": "BaseBdev3", 00:13:42.805 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:42.805 "is_configured": true, 00:13:42.805 "data_offset": 2048, 00:13:42.805 "data_size": 63488 00:13:42.805 }, 00:13:42.805 { 00:13:42.805 "name": "BaseBdev4", 00:13:42.805 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:42.805 "is_configured": true, 00:13:42.805 "data_offset": 2048, 00:13:42.805 "data_size": 63488 00:13:42.805 } 00:13:42.805 ] 00:13:42.805 }' 00:13:42.805 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.065 [2024-12-12 09:27:16.895105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.065 [2024-12-12 09:27:16.910085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.065 09:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:43.065 [2024-12-12 09:27:16.912301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.004 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.004 "name": "raid_bdev1", 00:13:44.004 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:44.004 "strip_size_kb": 0, 00:13:44.004 "state": "online", 00:13:44.004 "raid_level": "raid1", 00:13:44.004 "superblock": true, 00:13:44.004 "num_base_bdevs": 4, 00:13:44.004 "num_base_bdevs_discovered": 4, 00:13:44.004 "num_base_bdevs_operational": 4, 00:13:44.004 "process": { 00:13:44.004 "type": "rebuild", 00:13:44.004 "target": "spare", 00:13:44.004 "progress": { 00:13:44.004 "blocks": 20480, 00:13:44.004 "percent": 32 00:13:44.004 } 00:13:44.004 }, 00:13:44.004 "base_bdevs_list": [ 00:13:44.004 { 00:13:44.004 "name": "spare", 00:13:44.004 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:44.004 "is_configured": true, 00:13:44.004 "data_offset": 2048, 
00:13:44.004 "data_size": 63488 00:13:44.004 }, 00:13:44.004 { 00:13:44.004 "name": "BaseBdev2", 00:13:44.004 "uuid": "a39c0457-0046-528a-8172-9386941c27bb", 00:13:44.004 "is_configured": true, 00:13:44.004 "data_offset": 2048, 00:13:44.004 "data_size": 63488 00:13:44.004 }, 00:13:44.004 { 00:13:44.004 "name": "BaseBdev3", 00:13:44.004 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:44.004 "is_configured": true, 00:13:44.004 "data_offset": 2048, 00:13:44.004 "data_size": 63488 00:13:44.004 }, 00:13:44.004 { 00:13:44.004 "name": "BaseBdev4", 00:13:44.004 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:44.004 "is_configured": true, 00:13:44.005 "data_offset": 2048, 00:13:44.005 "data_size": 63488 00:13:44.005 } 00:13:44.005 ] 00:13:44.005 }' 00:13:44.005 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.005 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.005 09:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:44.264 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:44.264 09:27:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.264 [2024-12-12 09:27:18.052429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.264 [2024-12-12 09:27:18.220996] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.264 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.264 "name": "raid_bdev1", 
00:13:44.264 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:44.265 "strip_size_kb": 0, 00:13:44.265 "state": "online", 00:13:44.265 "raid_level": "raid1", 00:13:44.265 "superblock": true, 00:13:44.265 "num_base_bdevs": 4, 00:13:44.265 "num_base_bdevs_discovered": 3, 00:13:44.265 "num_base_bdevs_operational": 3, 00:13:44.265 "process": { 00:13:44.265 "type": "rebuild", 00:13:44.265 "target": "spare", 00:13:44.265 "progress": { 00:13:44.265 "blocks": 24576, 00:13:44.265 "percent": 38 00:13:44.265 } 00:13:44.265 }, 00:13:44.265 "base_bdevs_list": [ 00:13:44.265 { 00:13:44.265 "name": "spare", 00:13:44.265 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:44.265 "is_configured": true, 00:13:44.265 "data_offset": 2048, 00:13:44.265 "data_size": 63488 00:13:44.265 }, 00:13:44.265 { 00:13:44.265 "name": null, 00:13:44.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.265 "is_configured": false, 00:13:44.265 "data_offset": 0, 00:13:44.265 "data_size": 63488 00:13:44.265 }, 00:13:44.265 { 00:13:44.265 "name": "BaseBdev3", 00:13:44.265 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:44.265 "is_configured": true, 00:13:44.265 "data_offset": 2048, 00:13:44.265 "data_size": 63488 00:13:44.265 }, 00:13:44.265 { 00:13:44.265 "name": "BaseBdev4", 00:13:44.265 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:44.265 "is_configured": true, 00:13:44.265 "data_offset": 2048, 00:13:44.265 "data_size": 63488 00:13:44.265 } 00:13:44.265 ] 00:13:44.265 }' 00:13:44.265 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@706 -- # local timeout=464 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.524 "name": "raid_bdev1", 00:13:44.524 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:44.524 "strip_size_kb": 0, 00:13:44.524 "state": "online", 00:13:44.524 "raid_level": "raid1", 00:13:44.524 "superblock": true, 00:13:44.524 "num_base_bdevs": 4, 00:13:44.524 "num_base_bdevs_discovered": 3, 00:13:44.524 "num_base_bdevs_operational": 3, 00:13:44.524 "process": { 00:13:44.524 "type": "rebuild", 00:13:44.524 "target": "spare", 00:13:44.524 "progress": { 00:13:44.524 "blocks": 26624, 00:13:44.524 "percent": 41 00:13:44.524 } 00:13:44.524 }, 00:13:44.524 "base_bdevs_list": [ 00:13:44.524 { 00:13:44.524 "name": 
"spare", 00:13:44.524 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:44.524 "is_configured": true, 00:13:44.524 "data_offset": 2048, 00:13:44.524 "data_size": 63488 00:13:44.524 }, 00:13:44.524 { 00:13:44.524 "name": null, 00:13:44.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.524 "is_configured": false, 00:13:44.524 "data_offset": 0, 00:13:44.524 "data_size": 63488 00:13:44.524 }, 00:13:44.524 { 00:13:44.524 "name": "BaseBdev3", 00:13:44.524 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:44.524 "is_configured": true, 00:13:44.524 "data_offset": 2048, 00:13:44.524 "data_size": 63488 00:13:44.524 }, 00:13:44.524 { 00:13:44.524 "name": "BaseBdev4", 00:13:44.524 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:44.524 "is_configured": true, 00:13:44.524 "data_offset": 2048, 00:13:44.524 "data_size": 63488 00:13:44.524 } 00:13:44.524 ] 00:13:44.524 }' 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.524 09:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.907 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.908 "name": "raid_bdev1", 00:13:45.908 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:45.908 "strip_size_kb": 0, 00:13:45.908 "state": "online", 00:13:45.908 "raid_level": "raid1", 00:13:45.908 "superblock": true, 00:13:45.908 "num_base_bdevs": 4, 00:13:45.908 "num_base_bdevs_discovered": 3, 00:13:45.908 "num_base_bdevs_operational": 3, 00:13:45.908 "process": { 00:13:45.908 "type": "rebuild", 00:13:45.908 "target": "spare", 00:13:45.908 "progress": { 00:13:45.908 "blocks": 49152, 00:13:45.908 "percent": 77 00:13:45.908 } 00:13:45.908 }, 00:13:45.908 "base_bdevs_list": [ 00:13:45.908 { 00:13:45.908 "name": "spare", 00:13:45.908 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:45.908 "is_configured": true, 00:13:45.908 "data_offset": 2048, 00:13:45.908 "data_size": 63488 00:13:45.908 }, 00:13:45.908 { 00:13:45.908 "name": null, 00:13:45.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.908 "is_configured": false, 00:13:45.908 "data_offset": 0, 00:13:45.908 "data_size": 63488 00:13:45.908 }, 00:13:45.908 { 00:13:45.908 "name": "BaseBdev3", 00:13:45.908 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:45.908 "is_configured": true, 00:13:45.908 "data_offset": 2048, 00:13:45.908 
"data_size": 63488 00:13:45.908 }, 00:13:45.908 { 00:13:45.908 "name": "BaseBdev4", 00:13:45.908 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:45.908 "is_configured": true, 00:13:45.908 "data_offset": 2048, 00:13:45.908 "data_size": 63488 00:13:45.908 } 00:13:45.908 ] 00:13:45.908 }' 00:13:45.908 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.908 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.908 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.908 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.908 09:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.168 [2024-12-12 09:27:20.135453] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:46.168 [2024-12-12 09:27:20.135539] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:46.168 [2024-12-12 09:27:20.135689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.738 "name": "raid_bdev1", 00:13:46.738 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:46.738 "strip_size_kb": 0, 00:13:46.738 "state": "online", 00:13:46.738 "raid_level": "raid1", 00:13:46.738 "superblock": true, 00:13:46.738 "num_base_bdevs": 4, 00:13:46.738 "num_base_bdevs_discovered": 3, 00:13:46.738 "num_base_bdevs_operational": 3, 00:13:46.738 "base_bdevs_list": [ 00:13:46.738 { 00:13:46.738 "name": "spare", 00:13:46.738 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:46.738 "is_configured": true, 00:13:46.738 "data_offset": 2048, 00:13:46.738 "data_size": 63488 00:13:46.738 }, 00:13:46.738 { 00:13:46.738 "name": null, 00:13:46.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.738 "is_configured": false, 00:13:46.738 "data_offset": 0, 00:13:46.738 "data_size": 63488 00:13:46.738 }, 00:13:46.738 { 00:13:46.738 "name": "BaseBdev3", 00:13:46.738 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:46.738 "is_configured": true, 00:13:46.738 "data_offset": 2048, 00:13:46.738 "data_size": 63488 00:13:46.738 }, 00:13:46.738 { 00:13:46.738 "name": "BaseBdev4", 00:13:46.738 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:46.738 "is_configured": true, 00:13:46.738 "data_offset": 2048, 00:13:46.738 "data_size": 63488 00:13:46.738 } 00:13:46.738 ] 00:13:46.738 }' 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.738 
09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:46.738 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.998 "name": "raid_bdev1", 00:13:46.998 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:46.998 "strip_size_kb": 0, 00:13:46.998 "state": "online", 00:13:46.998 "raid_level": "raid1", 00:13:46.998 "superblock": true, 00:13:46.998 "num_base_bdevs": 4, 00:13:46.998 "num_base_bdevs_discovered": 3, 00:13:46.998 "num_base_bdevs_operational": 3, 00:13:46.998 
"base_bdevs_list": [ 00:13:46.998 { 00:13:46.998 "name": "spare", 00:13:46.998 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:46.998 "is_configured": true, 00:13:46.998 "data_offset": 2048, 00:13:46.998 "data_size": 63488 00:13:46.998 }, 00:13:46.998 { 00:13:46.998 "name": null, 00:13:46.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.998 "is_configured": false, 00:13:46.998 "data_offset": 0, 00:13:46.998 "data_size": 63488 00:13:46.998 }, 00:13:46.998 { 00:13:46.998 "name": "BaseBdev3", 00:13:46.998 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:46.998 "is_configured": true, 00:13:46.998 "data_offset": 2048, 00:13:46.998 "data_size": 63488 00:13:46.998 }, 00:13:46.998 { 00:13:46.998 "name": "BaseBdev4", 00:13:46.998 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:46.998 "is_configured": true, 00:13:46.998 "data_offset": 2048, 00:13:46.998 "data_size": 63488 00:13:46.998 } 00:13:46.998 ] 00:13:46.998 }' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.998 09:27:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.998 "name": "raid_bdev1", 00:13:46.998 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:46.998 "strip_size_kb": 0, 00:13:46.998 "state": "online", 00:13:46.998 "raid_level": "raid1", 00:13:46.998 "superblock": true, 00:13:46.998 "num_base_bdevs": 4, 00:13:46.998 "num_base_bdevs_discovered": 3, 00:13:46.998 "num_base_bdevs_operational": 3, 00:13:46.998 "base_bdevs_list": [ 00:13:46.998 { 00:13:46.998 "name": "spare", 00:13:46.998 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:46.998 "is_configured": true, 00:13:46.998 "data_offset": 2048, 00:13:46.998 "data_size": 63488 00:13:46.998 }, 00:13:46.998 { 00:13:46.998 "name": null, 00:13:46.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.998 "is_configured": false, 00:13:46.998 "data_offset": 0, 00:13:46.998 "data_size": 63488 00:13:46.998 }, 
00:13:46.998 { 00:13:46.998 "name": "BaseBdev3", 00:13:46.998 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:46.998 "is_configured": true, 00:13:46.998 "data_offset": 2048, 00:13:46.998 "data_size": 63488 00:13:46.998 }, 00:13:46.998 { 00:13:46.998 "name": "BaseBdev4", 00:13:46.998 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:46.998 "is_configured": true, 00:13:46.998 "data_offset": 2048, 00:13:46.998 "data_size": 63488 00:13:46.998 } 00:13:46.998 ] 00:13:46.998 }' 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.998 09:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.568 [2024-12-12 09:27:21.325918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.568 [2024-12-12 09:27:21.326019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.568 [2024-12-12 09:27:21.326141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.568 [2024-12-12 09:27:21.326268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.568 [2024-12-12 09:27:21.326314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:47.568 /dev/nbd0 00:13:47.568 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.828 1+0 records in 00:13:47.828 1+0 records out 00:13:47.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597867 s, 6.9 MB/s 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.828 09:27:21 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:47.828 /dev/nbd1 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:47.828 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.089 1+0 records in 00:13:48.089 1+0 records out 00:13:48.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532373 s, 7.7 MB/s 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.089 09:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.089 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.349 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.609 09:27:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.609 [2024-12-12 09:27:22.466722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.609 [2024-12-12 09:27:22.466778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.609 [2024-12-12 09:27:22.466803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:48.609 [2024-12-12 09:27:22.466812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.609 [2024-12-12 09:27:22.469319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.609 [2024-12-12 09:27:22.469357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.609 [2024-12-12 09:27:22.469454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.609 [2024-12-12 09:27:22.469504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.609 [2024-12-12 09:27:22.469663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.609 [2024-12-12 09:27:22.469751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.609 spare 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.609 [2024-12-12 09:27:22.569641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:48.609 [2024-12-12 09:27:22.569665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:13:48.609 [2024-12-12 09:27:22.569952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:48.609 [2024-12-12 09:27:22.570172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:48.609 [2024-12-12 09:27:22.570185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:48.609 [2024-12-12 09:27:22.570361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.609 09:27:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.609 "name": "raid_bdev1", 00:13:48.609 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:48.609 "strip_size_kb": 0, 00:13:48.609 "state": "online", 00:13:48.609 "raid_level": "raid1", 00:13:48.609 "superblock": true, 00:13:48.609 "num_base_bdevs": 4, 00:13:48.609 "num_base_bdevs_discovered": 3, 00:13:48.609 "num_base_bdevs_operational": 3, 00:13:48.609 "base_bdevs_list": [ 00:13:48.609 { 00:13:48.609 "name": "spare", 00:13:48.609 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:48.609 "is_configured": true, 00:13:48.609 "data_offset": 2048, 00:13:48.609 "data_size": 63488 00:13:48.609 }, 00:13:48.609 { 00:13:48.609 "name": null, 00:13:48.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.609 "is_configured": false, 00:13:48.609 "data_offset": 2048, 00:13:48.609 "data_size": 63488 00:13:48.609 }, 00:13:48.609 { 00:13:48.609 "name": "BaseBdev3", 00:13:48.609 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:48.609 "is_configured": true, 00:13:48.609 "data_offset": 2048, 00:13:48.609 "data_size": 63488 00:13:48.609 }, 00:13:48.609 { 00:13:48.609 "name": "BaseBdev4", 00:13:48.609 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:48.609 "is_configured": true, 00:13:48.609 "data_offset": 2048, 00:13:48.609 "data_size": 63488 00:13:48.609 } 00:13:48.609 ] 00:13:48.609 }' 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.609 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.180 "name": "raid_bdev1", 00:13:49.180 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:49.180 "strip_size_kb": 0, 00:13:49.180 "state": "online", 00:13:49.180 "raid_level": "raid1", 00:13:49.180 "superblock": true, 00:13:49.180 "num_base_bdevs": 4, 00:13:49.180 "num_base_bdevs_discovered": 3, 00:13:49.180 "num_base_bdevs_operational": 3, 00:13:49.180 "base_bdevs_list": [ 00:13:49.180 { 00:13:49.180 "name": "spare", 00:13:49.180 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:49.180 "is_configured": true, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 }, 00:13:49.180 { 00:13:49.180 "name": null, 00:13:49.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.180 "is_configured": false, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 }, 00:13:49.180 { 00:13:49.180 "name": 
"BaseBdev3", 00:13:49.180 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:49.180 "is_configured": true, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 }, 00:13:49.180 { 00:13:49.180 "name": "BaseBdev4", 00:13:49.180 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:49.180 "is_configured": true, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 } 00:13:49.180 ] 00:13:49.180 }' 00:13:49.180 09:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 [2024-12-12 09:27:23.125691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.180 09:27:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.180 "name": "raid_bdev1", 00:13:49.180 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:49.180 "strip_size_kb": 0, 00:13:49.180 "state": "online", 
00:13:49.180 "raid_level": "raid1", 00:13:49.180 "superblock": true, 00:13:49.180 "num_base_bdevs": 4, 00:13:49.180 "num_base_bdevs_discovered": 2, 00:13:49.180 "num_base_bdevs_operational": 2, 00:13:49.180 "base_bdevs_list": [ 00:13:49.180 { 00:13:49.180 "name": null, 00:13:49.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.180 "is_configured": false, 00:13:49.180 "data_offset": 0, 00:13:49.180 "data_size": 63488 00:13:49.180 }, 00:13:49.180 { 00:13:49.180 "name": null, 00:13:49.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.180 "is_configured": false, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 }, 00:13:49.180 { 00:13:49.180 "name": "BaseBdev3", 00:13:49.180 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:49.180 "is_configured": true, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 }, 00:13:49.180 { 00:13:49.180 "name": "BaseBdev4", 00:13:49.180 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:49.180 "is_configured": true, 00:13:49.180 "data_offset": 2048, 00:13:49.180 "data_size": 63488 00:13:49.180 } 00:13:49.180 ] 00:13:49.180 }' 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.180 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.750 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.750 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.750 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.750 [2024-12-12 09:27:23.588911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.750 [2024-12-12 09:27:23.589182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:49.750 [2024-12-12 09:27:23.589199] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:49.750 [2024-12-12 09:27:23.589241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.750 [2024-12-12 09:27:23.603475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:49.750 09:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.750 09:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:49.750 [2024-12-12 09:27:23.605684] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.690 "name": "raid_bdev1", 00:13:50.690 "uuid": 
"1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:50.690 "strip_size_kb": 0, 00:13:50.690 "state": "online", 00:13:50.690 "raid_level": "raid1", 00:13:50.690 "superblock": true, 00:13:50.690 "num_base_bdevs": 4, 00:13:50.690 "num_base_bdevs_discovered": 3, 00:13:50.690 "num_base_bdevs_operational": 3, 00:13:50.690 "process": { 00:13:50.690 "type": "rebuild", 00:13:50.690 "target": "spare", 00:13:50.690 "progress": { 00:13:50.690 "blocks": 20480, 00:13:50.690 "percent": 32 00:13:50.690 } 00:13:50.690 }, 00:13:50.690 "base_bdevs_list": [ 00:13:50.690 { 00:13:50.690 "name": "spare", 00:13:50.690 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:50.690 "is_configured": true, 00:13:50.690 "data_offset": 2048, 00:13:50.690 "data_size": 63488 00:13:50.690 }, 00:13:50.690 { 00:13:50.690 "name": null, 00:13:50.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.690 "is_configured": false, 00:13:50.690 "data_offset": 2048, 00:13:50.690 "data_size": 63488 00:13:50.690 }, 00:13:50.690 { 00:13:50.690 "name": "BaseBdev3", 00:13:50.690 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:50.690 "is_configured": true, 00:13:50.690 "data_offset": 2048, 00:13:50.690 "data_size": 63488 00:13:50.690 }, 00:13:50.690 { 00:13:50.690 "name": "BaseBdev4", 00:13:50.690 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:50.690 "is_configured": true, 00:13:50.690 "data_offset": 2048, 00:13:50.690 "data_size": 63488 00:13:50.690 } 00:13:50.690 ] 00:13:50.690 }' 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.690 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.950 [2024-12-12 09:27:24.764913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.950 [2024-12-12 09:27:24.814478] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.950 [2024-12-12 09:27:24.814533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.950 [2024-12-12 09:27:24.814553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.950 [2024-12-12 09:27:24.814560] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.950 "name": "raid_bdev1", 00:13:50.950 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:50.950 "strip_size_kb": 0, 00:13:50.950 "state": "online", 00:13:50.950 "raid_level": "raid1", 00:13:50.950 "superblock": true, 00:13:50.950 "num_base_bdevs": 4, 00:13:50.950 "num_base_bdevs_discovered": 2, 00:13:50.950 "num_base_bdevs_operational": 2, 00:13:50.950 "base_bdevs_list": [ 00:13:50.950 { 00:13:50.950 "name": null, 00:13:50.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.950 "is_configured": false, 00:13:50.950 "data_offset": 0, 00:13:50.950 "data_size": 63488 00:13:50.950 }, 00:13:50.950 { 00:13:50.950 "name": null, 00:13:50.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.950 "is_configured": false, 00:13:50.950 "data_offset": 2048, 00:13:50.950 "data_size": 63488 00:13:50.950 }, 00:13:50.950 { 00:13:50.950 "name": "BaseBdev3", 00:13:50.950 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:50.950 "is_configured": true, 00:13:50.950 "data_offset": 2048, 00:13:50.950 "data_size": 63488 00:13:50.950 }, 00:13:50.950 { 00:13:50.950 "name": "BaseBdev4", 00:13:50.950 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:50.950 "is_configured": true, 00:13:50.950 
"data_offset": 2048, 00:13:50.950 "data_size": 63488 00:13:50.950 } 00:13:50.950 ] 00:13:50.950 }' 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.950 09:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.520 09:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.520 09:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.520 09:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.520 [2024-12-12 09:27:25.289384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.520 [2024-12-12 09:27:25.289486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.520 [2024-12-12 09:27:25.289537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:51.520 [2024-12-12 09:27:25.289567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.520 [2024-12-12 09:27:25.290146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.520 [2024-12-12 09:27:25.290206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.520 [2024-12-12 09:27:25.290325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:51.520 [2024-12-12 09:27:25.290365] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:51.520 [2024-12-12 09:27:25.290410] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:51.520 [2024-12-12 09:27:25.290457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.520 [2024-12-12 09:27:25.304431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:51.520 spare 00:13:51.520 09:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.520 09:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:51.520 [2024-12-12 09:27:25.306617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.459 "name": "raid_bdev1", 00:13:52.459 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:52.459 "strip_size_kb": 0, 00:13:52.459 "state": "online", 00:13:52.459 
"raid_level": "raid1", 00:13:52.459 "superblock": true, 00:13:52.459 "num_base_bdevs": 4, 00:13:52.459 "num_base_bdevs_discovered": 3, 00:13:52.459 "num_base_bdevs_operational": 3, 00:13:52.459 "process": { 00:13:52.459 "type": "rebuild", 00:13:52.459 "target": "spare", 00:13:52.459 "progress": { 00:13:52.459 "blocks": 20480, 00:13:52.459 "percent": 32 00:13:52.459 } 00:13:52.459 }, 00:13:52.459 "base_bdevs_list": [ 00:13:52.459 { 00:13:52.459 "name": "spare", 00:13:52.459 "uuid": "7f243487-55cc-5461-b53a-9d92eb7cd891", 00:13:52.459 "is_configured": true, 00:13:52.459 "data_offset": 2048, 00:13:52.459 "data_size": 63488 00:13:52.459 }, 00:13:52.459 { 00:13:52.459 "name": null, 00:13:52.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.459 "is_configured": false, 00:13:52.459 "data_offset": 2048, 00:13:52.459 "data_size": 63488 00:13:52.459 }, 00:13:52.459 { 00:13:52.459 "name": "BaseBdev3", 00:13:52.459 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:52.459 "is_configured": true, 00:13:52.459 "data_offset": 2048, 00:13:52.459 "data_size": 63488 00:13:52.459 }, 00:13:52.459 { 00:13:52.459 "name": "BaseBdev4", 00:13:52.459 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:52.459 "is_configured": true, 00:13:52.459 "data_offset": 2048, 00:13:52.459 "data_size": 63488 00:13:52.459 } 00:13:52.459 ] 00:13:52.459 }' 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.460 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.460 [2024-12-12 09:27:26.450614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.719 [2024-12-12 09:27:26.515267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.720 [2024-12-12 09:27:26.515330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.720 [2024-12-12 09:27:26.515345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.720 [2024-12-12 09:27:26.515355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.720 
09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.720 "name": "raid_bdev1", 00:13:52.720 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:52.720 "strip_size_kb": 0, 00:13:52.720 "state": "online", 00:13:52.720 "raid_level": "raid1", 00:13:52.720 "superblock": true, 00:13:52.720 "num_base_bdevs": 4, 00:13:52.720 "num_base_bdevs_discovered": 2, 00:13:52.720 "num_base_bdevs_operational": 2, 00:13:52.720 "base_bdevs_list": [ 00:13:52.720 { 00:13:52.720 "name": null, 00:13:52.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.720 "is_configured": false, 00:13:52.720 "data_offset": 0, 00:13:52.720 "data_size": 63488 00:13:52.720 }, 00:13:52.720 { 00:13:52.720 "name": null, 00:13:52.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.720 "is_configured": false, 00:13:52.720 "data_offset": 2048, 00:13:52.720 "data_size": 63488 00:13:52.720 }, 00:13:52.720 { 00:13:52.720 "name": "BaseBdev3", 00:13:52.720 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:52.720 "is_configured": true, 00:13:52.720 "data_offset": 2048, 00:13:52.720 "data_size": 63488 00:13:52.720 }, 00:13:52.720 { 00:13:52.720 "name": "BaseBdev4", 00:13:52.720 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:52.720 "is_configured": true, 00:13:52.720 "data_offset": 2048, 00:13:52.720 "data_size": 63488 00:13:52.720 } 00:13:52.720 ] 00:13:52.720 }' 00:13:52.720 09:27:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.720 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.980 09:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.239 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.239 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.239 "name": "raid_bdev1", 00:13:53.239 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:53.239 "strip_size_kb": 0, 00:13:53.239 "state": "online", 00:13:53.239 "raid_level": "raid1", 00:13:53.239 "superblock": true, 00:13:53.239 "num_base_bdevs": 4, 00:13:53.239 "num_base_bdevs_discovered": 2, 00:13:53.239 "num_base_bdevs_operational": 2, 00:13:53.239 "base_bdevs_list": [ 00:13:53.239 { 00:13:53.239 "name": null, 00:13:53.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.239 "is_configured": false, 00:13:53.239 "data_offset": 0, 00:13:53.239 "data_size": 63488 00:13:53.239 }, 00:13:53.239 
{ 00:13:53.239 "name": null, 00:13:53.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.240 "is_configured": false, 00:13:53.240 "data_offset": 2048, 00:13:53.240 "data_size": 63488 00:13:53.240 }, 00:13:53.240 { 00:13:53.240 "name": "BaseBdev3", 00:13:53.240 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:53.240 "is_configured": true, 00:13:53.240 "data_offset": 2048, 00:13:53.240 "data_size": 63488 00:13:53.240 }, 00:13:53.240 { 00:13:53.240 "name": "BaseBdev4", 00:13:53.240 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:53.240 "is_configured": true, 00:13:53.240 "data_offset": 2048, 00:13:53.240 "data_size": 63488 00:13:53.240 } 00:13:53.240 ] 00:13:53.240 }' 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.240 [2024-12-12 09:27:27.141430] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.240 [2024-12-12 09:27:27.141544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.240 [2024-12-12 09:27:27.141572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:53.240 [2024-12-12 09:27:27.141584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.240 [2024-12-12 09:27:27.142126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.240 [2024-12-12 09:27:27.142150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.240 [2024-12-12 09:27:27.142228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:53.240 [2024-12-12 09:27:27.142244] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:53.240 [2024-12-12 09:27:27.142252] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:53.240 [2024-12-12 09:27:27.142279] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:53.240 BaseBdev1 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.240 09:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.215 09:27:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.215 "name": "raid_bdev1", 00:13:54.215 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:54.215 "strip_size_kb": 0, 00:13:54.215 "state": "online", 00:13:54.215 "raid_level": "raid1", 00:13:54.215 "superblock": true, 00:13:54.215 "num_base_bdevs": 4, 00:13:54.215 "num_base_bdevs_discovered": 2, 00:13:54.215 "num_base_bdevs_operational": 2, 00:13:54.215 "base_bdevs_list": [ 00:13:54.215 { 00:13:54.215 "name": null, 00:13:54.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.215 "is_configured": false, 00:13:54.215 "data_offset": 0, 00:13:54.215 "data_size": 63488 00:13:54.215 }, 00:13:54.215 { 00:13:54.215 "name": null, 00:13:54.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.215 
"is_configured": false, 00:13:54.215 "data_offset": 2048, 00:13:54.215 "data_size": 63488 00:13:54.215 }, 00:13:54.215 { 00:13:54.215 "name": "BaseBdev3", 00:13:54.215 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:54.215 "is_configured": true, 00:13:54.215 "data_offset": 2048, 00:13:54.215 "data_size": 63488 00:13:54.215 }, 00:13:54.215 { 00:13:54.215 "name": "BaseBdev4", 00:13:54.215 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:54.215 "is_configured": true, 00:13:54.215 "data_offset": 2048, 00:13:54.215 "data_size": 63488 00:13:54.215 } 00:13:54.215 ] 00:13:54.215 }' 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.215 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:54.786 "name": "raid_bdev1", 00:13:54.786 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:54.786 "strip_size_kb": 0, 00:13:54.786 "state": "online", 00:13:54.786 "raid_level": "raid1", 00:13:54.786 "superblock": true, 00:13:54.786 "num_base_bdevs": 4, 00:13:54.786 "num_base_bdevs_discovered": 2, 00:13:54.786 "num_base_bdevs_operational": 2, 00:13:54.786 "base_bdevs_list": [ 00:13:54.786 { 00:13:54.786 "name": null, 00:13:54.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.786 "is_configured": false, 00:13:54.786 "data_offset": 0, 00:13:54.786 "data_size": 63488 00:13:54.786 }, 00:13:54.786 { 00:13:54.786 "name": null, 00:13:54.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.786 "is_configured": false, 00:13:54.786 "data_offset": 2048, 00:13:54.786 "data_size": 63488 00:13:54.786 }, 00:13:54.786 { 00:13:54.786 "name": "BaseBdev3", 00:13:54.786 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:54.786 "is_configured": true, 00:13:54.786 "data_offset": 2048, 00:13:54.786 "data_size": 63488 00:13:54.786 }, 00:13:54.786 { 00:13:54.786 "name": "BaseBdev4", 00:13:54.786 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:54.786 "is_configured": true, 00:13:54.786 "data_offset": 2048, 00:13:54.786 "data_size": 63488 00:13:54.786 } 00:13:54.786 ] 00:13:54.786 }' 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.786 [2024-12-12 09:27:28.755082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.786 [2024-12-12 09:27:28.755431] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:54.786 [2024-12-12 09:27:28.755497] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:54.786 request: 00:13:54.786 { 00:13:54.786 "base_bdev": "BaseBdev1", 00:13:54.786 "raid_bdev": "raid_bdev1", 00:13:54.786 "method": "bdev_raid_add_base_bdev", 00:13:54.786 "req_id": 1 00:13:54.786 } 00:13:54.786 Got JSON-RPC error response 00:13:54.786 response: 00:13:54.786 { 00:13:54.786 "code": -22, 00:13:54.786 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:54.786 } 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.786 09:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.166 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.166 "name": "raid_bdev1", 00:13:56.166 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:56.166 "strip_size_kb": 0, 00:13:56.166 "state": "online", 00:13:56.166 "raid_level": "raid1", 00:13:56.166 "superblock": true, 00:13:56.166 "num_base_bdevs": 4, 00:13:56.166 "num_base_bdevs_discovered": 2, 00:13:56.166 "num_base_bdevs_operational": 2, 00:13:56.166 "base_bdevs_list": [ 00:13:56.166 { 00:13:56.166 "name": null, 00:13:56.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.166 "is_configured": false, 00:13:56.166 "data_offset": 0, 00:13:56.166 "data_size": 63488 00:13:56.166 }, 00:13:56.166 { 00:13:56.166 "name": null, 00:13:56.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.167 "is_configured": false, 00:13:56.167 "data_offset": 2048, 00:13:56.167 "data_size": 63488 00:13:56.167 }, 00:13:56.167 { 00:13:56.167 "name": "BaseBdev3", 00:13:56.167 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:56.167 "is_configured": true, 00:13:56.167 "data_offset": 2048, 00:13:56.167 "data_size": 63488 00:13:56.167 }, 00:13:56.167 { 00:13:56.167 "name": "BaseBdev4", 00:13:56.167 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:56.167 "is_configured": true, 00:13:56.167 "data_offset": 2048, 00:13:56.167 "data_size": 63488 00:13:56.167 } 00:13:56.167 ] 00:13:56.167 }' 00:13:56.167 09:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.167 09:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.427 09:27:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.427 "name": "raid_bdev1", 00:13:56.427 "uuid": "1ccfea75-2544-4894-ac16-1e6b0993e17f", 00:13:56.427 "strip_size_kb": 0, 00:13:56.427 "state": "online", 00:13:56.427 "raid_level": "raid1", 00:13:56.427 "superblock": true, 00:13:56.427 "num_base_bdevs": 4, 00:13:56.427 "num_base_bdevs_discovered": 2, 00:13:56.427 "num_base_bdevs_operational": 2, 00:13:56.427 "base_bdevs_list": [ 00:13:56.427 { 00:13:56.427 "name": null, 00:13:56.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.427 "is_configured": false, 00:13:56.427 "data_offset": 0, 00:13:56.427 "data_size": 63488 00:13:56.427 }, 00:13:56.427 { 00:13:56.427 "name": null, 00:13:56.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.427 "is_configured": false, 00:13:56.427 "data_offset": 2048, 00:13:56.427 "data_size": 63488 00:13:56.427 }, 00:13:56.427 { 00:13:56.427 "name": "BaseBdev3", 00:13:56.427 "uuid": "5670eece-b1d7-5793-a469-de4811d38df7", 00:13:56.427 "is_configured": true, 00:13:56.427 "data_offset": 2048, 00:13:56.427 "data_size": 63488 00:13:56.427 }, 
00:13:56.427 { 00:13:56.427 "name": "BaseBdev4", 00:13:56.427 "uuid": "db1f921a-686f-5ecf-92e7-33329785960a", 00:13:56.427 "is_configured": true, 00:13:56.427 "data_offset": 2048, 00:13:56.427 "data_size": 63488 00:13:56.427 } 00:13:56.427 ] 00:13:56.427 }' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79129 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79129 ']' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79129 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79129 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79129' 00:13:56.427 killing process with pid 79129 00:13:56.427 Received shutdown signal, test time was about 60.000000 seconds 00:13:56.427 00:13:56.427 Latency(us) 00:13:56.427 [2024-12-12T09:27:30.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.427 
[2024-12-12T09:27:30.450Z] =================================================================================================================== 00:13:56.427 [2024-12-12T09:27:30.450Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79129 00:13:56.427 [2024-12-12 09:27:30.403139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.427 [2024-12-12 09:27:30.403264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.427 09:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79129 00:13:56.427 [2024-12-12 09:27:30.403336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.427 [2024-12-12 09:27:30.403346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:56.997 [2024-12-12 09:27:30.909438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.378 ************************************ 00:13:58.378 END TEST raid_rebuild_test_sb 00:13:58.378 ************************************ 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.378 00:13:58.378 real 0m24.994s 00:13:58.378 user 0m29.834s 00:13:58.378 sys 0m3.899s 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.378 09:27:32 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:58.378 09:27:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:58.378 09:27:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.378 09:27:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:58.378 ************************************ 00:13:58.378 START TEST raid_rebuild_test_io 00:13:58.378 ************************************ 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79884 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79884 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79884 ']' 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.378 09:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.378 [2024-12-12 09:27:32.249976] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:13:58.378 [2024-12-12 09:27:32.250155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79884 ] 00:13:58.378 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.378 Zero copy mechanism will not be used. 00:13:58.638 [2024-12-12 09:27:32.427666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.638 [2024-12-12 09:27:32.560306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.897 [2024-12-12 09:27:32.788791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.897 [2024-12-12 09:27:32.788948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.169 BaseBdev1_malloc 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.169 [2024-12-12 09:27:33.105188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.169 [2024-12-12 09:27:33.105294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.169 [2024-12-12 09:27:33.105334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.169 [2024-12-12 09:27:33.105363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.169 [2024-12-12 09:27:33.107805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.169 [2024-12-12 09:27:33.107877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.169 BaseBdev1 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:59.169 BaseBdev2_malloc 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.169 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.169 [2024-12-12 09:27:33.165984] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.169 [2024-12-12 09:27:33.166080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.169 [2024-12-12 09:27:33.166117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.169 [2024-12-12 09:27:33.166170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.169 [2024-12-12 09:27:33.168498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.170 [2024-12-12 09:27:33.168569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.170 BaseBdev2 00:13:59.170 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.170 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.170 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.170 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.170 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 BaseBdev3_malloc 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 [2024-12-12 09:27:33.260188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:59.430 [2024-12-12 09:27:33.260291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.430 [2024-12-12 09:27:33.260318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:59.430 [2024-12-12 09:27:33.260346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.430 [2024-12-12 09:27:33.262795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.430 [2024-12-12 09:27:33.262834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.430 BaseBdev3 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 BaseBdev4_malloc 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 [2024-12-12 09:27:33.320420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:59.430 [2024-12-12 09:27:33.320474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.430 [2024-12-12 09:27:33.320495] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:59.430 [2024-12-12 09:27:33.320506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.430 [2024-12-12 09:27:33.322833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.430 [2024-12-12 09:27:33.322870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:59.430 BaseBdev4 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 spare_malloc 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 spare_delay 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 [2024-12-12 09:27:33.393133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.430 [2024-12-12 09:27:33.393179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.430 [2024-12-12 09:27:33.393194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:59.430 [2024-12-12 09:27:33.393205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.430 [2024-12-12 09:27:33.395503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.430 [2024-12-12 09:27:33.395540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.430 spare 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 [2024-12-12 09:27:33.405161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.430 [2024-12-12 09:27:33.407212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.430 [2024-12-12 09:27:33.407277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.430 [2024-12-12 09:27:33.407327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:59.430 [2024-12-12 09:27:33.407416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:59.430 [2024-12-12 09:27:33.407432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:59.430 [2024-12-12 09:27:33.407693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:59.430 [2024-12-12 09:27:33.407887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:59.430 [2024-12-12 09:27:33.407900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:59.430 [2024-12-12 09:27:33.408050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.430 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.690 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.690 "name": "raid_bdev1", 00:13:59.690 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:13:59.690 "strip_size_kb": 0, 00:13:59.690 "state": "online", 00:13:59.690 "raid_level": "raid1", 00:13:59.690 "superblock": false, 00:13:59.690 "num_base_bdevs": 4, 00:13:59.690 "num_base_bdevs_discovered": 4, 00:13:59.690 "num_base_bdevs_operational": 4, 00:13:59.690 "base_bdevs_list": [ 00:13:59.690 { 00:13:59.690 "name": "BaseBdev1", 00:13:59.690 "uuid": "57d3054c-ede1-5918-acf6-28e4c6e34fea", 00:13:59.690 "is_configured": true, 00:13:59.690 "data_offset": 0, 00:13:59.690 "data_size": 65536 00:13:59.690 }, 00:13:59.690 { 00:13:59.690 "name": "BaseBdev2", 00:13:59.690 "uuid": "ba002aa8-1668-5cf1-9ee7-25f8431a78ed", 00:13:59.690 "is_configured": true, 00:13:59.690 "data_offset": 0, 00:13:59.690 "data_size": 65536 00:13:59.690 }, 00:13:59.690 { 00:13:59.690 "name": "BaseBdev3", 00:13:59.690 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:13:59.690 "is_configured": true, 00:13:59.690 "data_offset": 0, 00:13:59.690 "data_size": 65536 00:13:59.690 }, 00:13:59.690 { 00:13:59.690 "name": "BaseBdev4", 00:13:59.690 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:13:59.690 "is_configured": true, 00:13:59.690 "data_offset": 0, 00:13:59.690 "data_size": 65536 00:13:59.690 } 00:13:59.690 ] 00:13:59.690 }' 00:13:59.690 
09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.690 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.950 [2024-12-12 09:27:33.872661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:59.950 09:27:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.950 [2024-12-12 09:27:33.948197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.950 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.951 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.210 09:27:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.210 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.210 "name": "raid_bdev1", 00:14:00.210 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:00.210 "strip_size_kb": 0, 00:14:00.210 "state": "online", 00:14:00.210 "raid_level": "raid1", 00:14:00.210 "superblock": false, 00:14:00.210 "num_base_bdevs": 4, 00:14:00.210 "num_base_bdevs_discovered": 3, 00:14:00.210 "num_base_bdevs_operational": 3, 00:14:00.210 "base_bdevs_list": [ 00:14:00.210 { 00:14:00.210 "name": null, 00:14:00.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.210 "is_configured": false, 00:14:00.210 "data_offset": 0, 00:14:00.210 "data_size": 65536 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "name": "BaseBdev2", 00:14:00.210 "uuid": "ba002aa8-1668-5cf1-9ee7-25f8431a78ed", 00:14:00.210 "is_configured": true, 00:14:00.210 "data_offset": 0, 00:14:00.210 "data_size": 65536 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "name": "BaseBdev3", 00:14:00.210 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:00.210 "is_configured": true, 00:14:00.210 "data_offset": 0, 00:14:00.210 "data_size": 65536 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "name": "BaseBdev4", 00:14:00.210 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:00.210 "is_configured": true, 00:14:00.210 "data_offset": 0, 00:14:00.210 "data_size": 65536 00:14:00.210 } 00:14:00.210 ] 00:14:00.210 }' 00:14:00.210 09:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.210 09:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.210 [2024-12-12 09:27:34.045520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:00.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:00.211 Zero copy mechanism will not be used. 00:14:00.211 Running I/O for 60 seconds... 
00:14:00.470 09:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.470 09:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.470 09:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.470 [2024-12-12 09:27:34.414068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.470 09:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.470 09:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.730 [2024-12-12 09:27:34.495732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:00.730 [2024-12-12 09:27:34.498143] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.730 [2024-12-12 09:27:34.616975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.730 [2024-12-12 09:27:34.618010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.990 [2024-12-12 09:27:34.875992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.990 [2024-12-12 09:27:34.877247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.518 173.00 IOPS, 519.00 MiB/s [2024-12-12T09:27:35.541Z] [2024-12-12 09:27:35.332671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.518 [2024-12-12 09:27:35.333179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.518 "name": "raid_bdev1", 00:14:01.518 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:01.518 "strip_size_kb": 0, 00:14:01.518 "state": "online", 00:14:01.518 "raid_level": "raid1", 00:14:01.518 "superblock": false, 00:14:01.518 "num_base_bdevs": 4, 00:14:01.518 "num_base_bdevs_discovered": 4, 00:14:01.518 "num_base_bdevs_operational": 4, 00:14:01.518 "process": { 00:14:01.518 "type": "rebuild", 00:14:01.518 "target": "spare", 00:14:01.518 "progress": { 00:14:01.518 "blocks": 10240, 00:14:01.518 "percent": 15 00:14:01.518 } 00:14:01.518 }, 00:14:01.518 "base_bdevs_list": [ 00:14:01.518 { 00:14:01.518 "name": "spare", 00:14:01.518 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:01.518 "is_configured": true, 00:14:01.518 "data_offset": 0, 00:14:01.518 "data_size": 65536 00:14:01.518 }, 00:14:01.518 { 
00:14:01.518 "name": "BaseBdev2", 00:14:01.518 "uuid": "ba002aa8-1668-5cf1-9ee7-25f8431a78ed", 00:14:01.518 "is_configured": true, 00:14:01.518 "data_offset": 0, 00:14:01.518 "data_size": 65536 00:14:01.518 }, 00:14:01.518 { 00:14:01.518 "name": "BaseBdev3", 00:14:01.518 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:01.518 "is_configured": true, 00:14:01.518 "data_offset": 0, 00:14:01.518 "data_size": 65536 00:14:01.518 }, 00:14:01.518 { 00:14:01.518 "name": "BaseBdev4", 00:14:01.518 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:01.518 "is_configured": true, 00:14:01.518 "data_offset": 0, 00:14:01.518 "data_size": 65536 00:14:01.518 } 00:14:01.518 ] 00:14:01.518 }' 00:14:01.518 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.778 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.778 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.778 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.778 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.778 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.778 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.778 [2024-12-12 09:27:35.614556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.778 [2024-12-12 09:27:35.675006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:01.778 [2024-12-12 09:27:35.784027] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.778 [2024-12-12 09:27:35.792071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:01.778 [2024-12-12 09:27:35.792188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.778 [2024-12-12 09:27:35.792217] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.038 [2024-12-12 09:27:35.824223] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.038 09:27:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.038 "name": "raid_bdev1", 00:14:02.038 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:02.038 "strip_size_kb": 0, 00:14:02.038 "state": "online", 00:14:02.038 "raid_level": "raid1", 00:14:02.038 "superblock": false, 00:14:02.038 "num_base_bdevs": 4, 00:14:02.038 "num_base_bdevs_discovered": 3, 00:14:02.038 "num_base_bdevs_operational": 3, 00:14:02.038 "base_bdevs_list": [ 00:14:02.038 { 00:14:02.038 "name": null, 00:14:02.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.038 "is_configured": false, 00:14:02.038 "data_offset": 0, 00:14:02.038 "data_size": 65536 00:14:02.038 }, 00:14:02.038 { 00:14:02.038 "name": "BaseBdev2", 00:14:02.038 "uuid": "ba002aa8-1668-5cf1-9ee7-25f8431a78ed", 00:14:02.038 "is_configured": true, 00:14:02.038 "data_offset": 0, 00:14:02.038 "data_size": 65536 00:14:02.038 }, 00:14:02.038 { 00:14:02.038 "name": "BaseBdev3", 00:14:02.038 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:02.038 "is_configured": true, 00:14:02.038 "data_offset": 0, 00:14:02.038 "data_size": 65536 00:14:02.038 }, 00:14:02.038 { 00:14:02.038 "name": "BaseBdev4", 00:14:02.038 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:02.038 "is_configured": true, 00:14:02.038 "data_offset": 0, 00:14:02.038 "data_size": 65536 00:14:02.038 } 00:14:02.038 ] 00:14:02.038 }' 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.038 09:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.608 136.00 IOPS, 408.00 MiB/s [2024-12-12T09:27:36.631Z] 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.608 09:27:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.608 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.608 "name": "raid_bdev1", 00:14:02.608 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:02.608 "strip_size_kb": 0, 00:14:02.608 "state": "online", 00:14:02.608 "raid_level": "raid1", 00:14:02.608 "superblock": false, 00:14:02.608 "num_base_bdevs": 4, 00:14:02.608 "num_base_bdevs_discovered": 3, 00:14:02.608 "num_base_bdevs_operational": 3, 00:14:02.608 "base_bdevs_list": [ 00:14:02.608 { 00:14:02.608 "name": null, 00:14:02.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.608 "is_configured": false, 00:14:02.608 "data_offset": 0, 00:14:02.608 "data_size": 65536 00:14:02.608 }, 00:14:02.608 { 00:14:02.608 "name": "BaseBdev2", 00:14:02.608 "uuid": "ba002aa8-1668-5cf1-9ee7-25f8431a78ed", 00:14:02.608 "is_configured": true, 00:14:02.608 "data_offset": 0, 00:14:02.608 "data_size": 65536 00:14:02.608 }, 00:14:02.608 { 00:14:02.608 "name": "BaseBdev3", 00:14:02.608 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 
00:14:02.608 "is_configured": true, 00:14:02.608 "data_offset": 0, 00:14:02.608 "data_size": 65536 00:14:02.608 }, 00:14:02.608 { 00:14:02.609 "name": "BaseBdev4", 00:14:02.609 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:02.609 "is_configured": true, 00:14:02.609 "data_offset": 0, 00:14:02.609 "data_size": 65536 00:14:02.609 } 00:14:02.609 ] 00:14:02.609 }' 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.609 [2024-12-12 09:27:36.504737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.609 09:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:02.609 [2024-12-12 09:27:36.565121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:02.609 [2024-12-12 09:27:36.567396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.868 [2024-12-12 09:27:36.684726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:02.868 [2024-12-12 09:27:36.685688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:03.128 [2024-12-12 09:27:36.893154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:03.128 [2024-12-12 09:27:36.894533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:03.388 143.67 IOPS, 431.00 MiB/s [2024-12-12T09:27:37.411Z] [2024-12-12 09:27:37.387489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.648 "name": "raid_bdev1", 00:14:03.648 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:03.648 "strip_size_kb": 0, 00:14:03.648 "state": "online", 00:14:03.648 "raid_level": "raid1", 
00:14:03.648 "superblock": false, 00:14:03.648 "num_base_bdevs": 4, 00:14:03.648 "num_base_bdevs_discovered": 4, 00:14:03.648 "num_base_bdevs_operational": 4, 00:14:03.648 "process": { 00:14:03.648 "type": "rebuild", 00:14:03.648 "target": "spare", 00:14:03.648 "progress": { 00:14:03.648 "blocks": 12288, 00:14:03.648 "percent": 18 00:14:03.648 } 00:14:03.648 }, 00:14:03.648 "base_bdevs_list": [ 00:14:03.648 { 00:14:03.648 "name": "spare", 00:14:03.648 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:03.648 "is_configured": true, 00:14:03.648 "data_offset": 0, 00:14:03.648 "data_size": 65536 00:14:03.648 }, 00:14:03.648 { 00:14:03.648 "name": "BaseBdev2", 00:14:03.648 "uuid": "ba002aa8-1668-5cf1-9ee7-25f8431a78ed", 00:14:03.648 "is_configured": true, 00:14:03.648 "data_offset": 0, 00:14:03.648 "data_size": 65536 00:14:03.648 }, 00:14:03.648 { 00:14:03.648 "name": "BaseBdev3", 00:14:03.648 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:03.648 "is_configured": true, 00:14:03.648 "data_offset": 0, 00:14:03.648 "data_size": 65536 00:14:03.648 }, 00:14:03.648 { 00:14:03.648 "name": "BaseBdev4", 00:14:03.648 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:03.648 "is_configured": true, 00:14:03.648 "data_offset": 0, 00:14:03.648 "data_size": 65536 00:14:03.648 } 00:14:03.648 ] 00:14:03.648 }' 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.648 [2024-12-12 09:27:37.617891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:03.648 [2024-12-12 09:27:37.618837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.648 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.908 [2024-12-12 09:27:37.707861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.908 [2024-12-12 09:27:37.769820] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:03.908 [2024-12-12 09:27:37.769923] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:03.908 [2024-12-12 09:27:37.778638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.908 09:27:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.908 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.908 "name": "raid_bdev1", 00:14:03.908 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:03.908 "strip_size_kb": 0, 00:14:03.908 "state": "online", 00:14:03.908 "raid_level": "raid1", 00:14:03.908 "superblock": false, 00:14:03.908 "num_base_bdevs": 4, 00:14:03.908 "num_base_bdevs_discovered": 3, 00:14:03.908 "num_base_bdevs_operational": 3, 00:14:03.908 "process": { 00:14:03.908 "type": "rebuild", 00:14:03.908 "target": "spare", 00:14:03.908 "progress": { 00:14:03.908 "blocks": 16384, 00:14:03.908 "percent": 25 00:14:03.908 } 00:14:03.908 }, 00:14:03.908 "base_bdevs_list": [ 00:14:03.908 { 00:14:03.908 "name": "spare", 00:14:03.908 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:03.908 "is_configured": true, 00:14:03.908 "data_offset": 0, 00:14:03.908 "data_size": 65536 00:14:03.908 }, 00:14:03.908 { 00:14:03.908 "name": null, 00:14:03.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.908 "is_configured": false, 00:14:03.908 "data_offset": 0, 00:14:03.908 "data_size": 65536 00:14:03.908 }, 00:14:03.908 { 
00:14:03.908 "name": "BaseBdev3", 00:14:03.908 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:03.908 "is_configured": true, 00:14:03.908 "data_offset": 0, 00:14:03.908 "data_size": 65536 00:14:03.908 }, 00:14:03.908 { 00:14:03.909 "name": "BaseBdev4", 00:14:03.909 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:03.909 "is_configured": true, 00:14:03.909 "data_offset": 0, 00:14:03.909 "data_size": 65536 00:14:03.909 } 00:14:03.909 ] 00:14:03.909 }' 00:14:03.909 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.909 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.909 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=483 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.169 "name": "raid_bdev1", 00:14:04.169 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:04.169 "strip_size_kb": 0, 00:14:04.169 "state": "online", 00:14:04.169 "raid_level": "raid1", 00:14:04.169 "superblock": false, 00:14:04.169 "num_base_bdevs": 4, 00:14:04.169 "num_base_bdevs_discovered": 3, 00:14:04.169 "num_base_bdevs_operational": 3, 00:14:04.169 "process": { 00:14:04.169 "type": "rebuild", 00:14:04.169 "target": "spare", 00:14:04.169 "progress": { 00:14:04.169 "blocks": 16384, 00:14:04.169 "percent": 25 00:14:04.169 } 00:14:04.169 }, 00:14:04.169 "base_bdevs_list": [ 00:14:04.169 { 00:14:04.169 "name": "spare", 00:14:04.169 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:04.169 "is_configured": true, 00:14:04.169 "data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 }, 00:14:04.169 { 00:14:04.169 "name": null, 00:14:04.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.169 "is_configured": false, 00:14:04.169 "data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 }, 00:14:04.169 { 00:14:04.169 "name": "BaseBdev3", 00:14:04.169 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:04.169 "is_configured": true, 00:14:04.169 "data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 }, 00:14:04.169 { 00:14:04.169 "name": "BaseBdev4", 00:14:04.169 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:04.169 "is_configured": true, 00:14:04.169 "data_offset": 0, 00:14:04.169 "data_size": 65536 00:14:04.169 } 00:14:04.169 ] 00:14:04.169 }' 00:14:04.169 09:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.169 09:27:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.169 09:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.169 133.50 IOPS, 400.50 MiB/s [2024-12-12T09:27:38.192Z] 09:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.169 09:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.169 [2024-12-12 09:27:38.107289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:04.429 [2024-12-12 09:27:38.217278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:04.429 [2024-12-12 09:27:38.218169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:04.999 [2024-12-12 09:27:38.871400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:05.259 121.20 IOPS, 363.60 MiB/s [2024-12-12T09:27:39.282Z] 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.259 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.259 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.260 09:27:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.260 "name": "raid_bdev1", 00:14:05.260 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:05.260 "strip_size_kb": 0, 00:14:05.260 "state": "online", 00:14:05.260 "raid_level": "raid1", 00:14:05.260 "superblock": false, 00:14:05.260 "num_base_bdevs": 4, 00:14:05.260 "num_base_bdevs_discovered": 3, 00:14:05.260 "num_base_bdevs_operational": 3, 00:14:05.260 "process": { 00:14:05.260 "type": "rebuild", 00:14:05.260 "target": "spare", 00:14:05.260 "progress": { 00:14:05.260 "blocks": 34816, 00:14:05.260 "percent": 53 00:14:05.260 } 00:14:05.260 }, 00:14:05.260 "base_bdevs_list": [ 00:14:05.260 { 00:14:05.260 "name": "spare", 00:14:05.260 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:05.260 "is_configured": true, 00:14:05.260 "data_offset": 0, 00:14:05.260 "data_size": 65536 00:14:05.260 }, 00:14:05.260 { 00:14:05.260 "name": null, 00:14:05.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.260 "is_configured": false, 00:14:05.260 "data_offset": 0, 00:14:05.260 "data_size": 65536 00:14:05.260 }, 00:14:05.260 { 00:14:05.260 "name": "BaseBdev3", 00:14:05.260 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:05.260 "is_configured": true, 00:14:05.260 "data_offset": 0, 00:14:05.260 "data_size": 65536 00:14:05.260 }, 00:14:05.260 { 00:14:05.260 "name": "BaseBdev4", 00:14:05.260 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:05.260 "is_configured": true, 00:14:05.260 "data_offset": 0, 00:14:05.260 "data_size": 65536 00:14:05.260 } 
00:14:05.260 ] 00:14:05.260 }' 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.260 09:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.830 [2024-12-12 09:27:39.597343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:06.089 [2024-12-12 09:27:40.010255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:06.349 111.33 IOPS, 334.00 MiB/s [2024-12-12T09:27:40.372Z] 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.349 09:27:40 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.349 "name": "raid_bdev1", 00:14:06.349 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:06.349 "strip_size_kb": 0, 00:14:06.349 "state": "online", 00:14:06.349 "raid_level": "raid1", 00:14:06.349 "superblock": false, 00:14:06.349 "num_base_bdevs": 4, 00:14:06.349 "num_base_bdevs_discovered": 3, 00:14:06.349 "num_base_bdevs_operational": 3, 00:14:06.349 "process": { 00:14:06.349 "type": "rebuild", 00:14:06.349 "target": "spare", 00:14:06.349 "progress": { 00:14:06.349 "blocks": 51200, 00:14:06.349 "percent": 78 00:14:06.349 } 00:14:06.349 }, 00:14:06.349 "base_bdevs_list": [ 00:14:06.349 { 00:14:06.349 "name": "spare", 00:14:06.349 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:06.349 "is_configured": true, 00:14:06.349 "data_offset": 0, 00:14:06.349 "data_size": 65536 00:14:06.349 }, 00:14:06.349 { 00:14:06.349 "name": null, 00:14:06.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.349 "is_configured": false, 00:14:06.349 "data_offset": 0, 00:14:06.349 "data_size": 65536 00:14:06.349 }, 00:14:06.349 { 00:14:06.349 "name": "BaseBdev3", 00:14:06.349 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:06.349 "is_configured": true, 00:14:06.349 "data_offset": 0, 00:14:06.349 "data_size": 65536 00:14:06.349 }, 00:14:06.349 { 00:14:06.349 "name": "BaseBdev4", 00:14:06.349 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:06.349 "is_configured": true, 00:14:06.349 "data_offset": 0, 00:14:06.349 "data_size": 65536 00:14:06.349 } 00:14:06.349 ] 00:14:06.349 }' 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.349 09:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.609 [2024-12-12 09:27:40.553131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:07.180 [2024-12-12 09:27:40.996417] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.180 99.43 IOPS, 298.29 MiB/s [2024-12-12T09:27:41.203Z] [2024-12-12 09:27:41.096248] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.180 [2024-12-12 09:27:41.100035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.440 09:27:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.440 "name": "raid_bdev1", 00:14:07.440 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:07.440 "strip_size_kb": 0, 00:14:07.440 "state": "online", 00:14:07.440 "raid_level": "raid1", 00:14:07.440 "superblock": false, 00:14:07.440 "num_base_bdevs": 4, 00:14:07.440 "num_base_bdevs_discovered": 3, 00:14:07.440 "num_base_bdevs_operational": 3, 00:14:07.440 "base_bdevs_list": [ 00:14:07.440 { 00:14:07.440 "name": "spare", 00:14:07.440 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:07.440 "is_configured": true, 00:14:07.440 "data_offset": 0, 00:14:07.440 "data_size": 65536 00:14:07.440 }, 00:14:07.440 { 00:14:07.440 "name": null, 00:14:07.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.440 "is_configured": false, 00:14:07.440 "data_offset": 0, 00:14:07.440 "data_size": 65536 00:14:07.440 }, 00:14:07.440 { 00:14:07.440 "name": "BaseBdev3", 00:14:07.440 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:07.440 "is_configured": true, 00:14:07.440 "data_offset": 0, 00:14:07.440 "data_size": 65536 00:14:07.440 }, 00:14:07.440 { 00:14:07.440 "name": "BaseBdev4", 00:14:07.440 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:07.440 "is_configured": true, 00:14:07.440 "data_offset": 0, 00:14:07.440 "data_size": 65536 00:14:07.440 } 00:14:07.440 ] 00:14:07.440 }' 00:14:07.440 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.701 "name": "raid_bdev1", 00:14:07.701 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:07.701 "strip_size_kb": 0, 00:14:07.701 "state": "online", 00:14:07.701 "raid_level": "raid1", 00:14:07.701 "superblock": false, 00:14:07.701 "num_base_bdevs": 4, 00:14:07.701 "num_base_bdevs_discovered": 3, 00:14:07.701 "num_base_bdevs_operational": 3, 00:14:07.701 "base_bdevs_list": [ 00:14:07.701 { 00:14:07.701 "name": "spare", 00:14:07.701 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:07.701 "is_configured": true, 00:14:07.701 "data_offset": 0, 00:14:07.701 "data_size": 65536 00:14:07.701 }, 00:14:07.701 { 00:14:07.701 "name": null, 
00:14:07.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.701 "is_configured": false, 00:14:07.701 "data_offset": 0, 00:14:07.701 "data_size": 65536 00:14:07.701 }, 00:14:07.701 { 00:14:07.701 "name": "BaseBdev3", 00:14:07.701 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:07.701 "is_configured": true, 00:14:07.701 "data_offset": 0, 00:14:07.701 "data_size": 65536 00:14:07.701 }, 00:14:07.701 { 00:14:07.701 "name": "BaseBdev4", 00:14:07.701 "uuid": "03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:07.701 "is_configured": true, 00:14:07.701 "data_offset": 0, 00:14:07.701 "data_size": 65536 00:14:07.701 } 00:14:07.701 ] 00:14:07.701 }' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.701 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.961 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.961 "name": "raid_bdev1", 00:14:07.961 "uuid": "ce7e9a01-bec2-46e0-b354-0652a9b325eb", 00:14:07.961 "strip_size_kb": 0, 00:14:07.961 "state": "online", 00:14:07.961 "raid_level": "raid1", 00:14:07.961 "superblock": false, 00:14:07.961 "num_base_bdevs": 4, 00:14:07.961 "num_base_bdevs_discovered": 3, 00:14:07.961 "num_base_bdevs_operational": 3, 00:14:07.961 "base_bdevs_list": [ 00:14:07.961 { 00:14:07.961 "name": "spare", 00:14:07.961 "uuid": "6fc65ae8-a2dd-5ebd-9136-05526d175655", 00:14:07.961 "is_configured": true, 00:14:07.961 "data_offset": 0, 00:14:07.961 "data_size": 65536 00:14:07.961 }, 00:14:07.961 { 00:14:07.961 "name": null, 00:14:07.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.961 "is_configured": false, 00:14:07.961 "data_offset": 0, 00:14:07.961 "data_size": 65536 00:14:07.961 }, 00:14:07.961 { 00:14:07.961 "name": "BaseBdev3", 00:14:07.961 "uuid": "34320a86-34fe-510d-a919-9965ab665007", 00:14:07.961 "is_configured": true, 00:14:07.961 "data_offset": 0, 00:14:07.961 "data_size": 65536 00:14:07.961 }, 00:14:07.961 { 00:14:07.961 "name": "BaseBdev4", 00:14:07.961 "uuid": 
"03a7201e-0293-51f1-b0b4-c79fabb9bdbb", 00:14:07.961 "is_configured": true, 00:14:07.961 "data_offset": 0, 00:14:07.961 "data_size": 65536 00:14:07.961 } 00:14:07.961 ] 00:14:07.961 }' 00:14:07.961 09:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.961 09:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.221 91.75 IOPS, 275.25 MiB/s [2024-12-12T09:27:42.244Z] 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.221 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.221 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.221 [2024-12-12 09:27:42.145180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.221 [2024-12-12 09:27:42.145271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.221 00:14:08.221 Latency(us) 00:14:08.221 [2024-12-12T09:27:42.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.221 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:08.221 raid_bdev1 : 8.17 90.42 271.27 0.00 0.00 15579.16 300.49 119052.30 00:14:08.221 [2024-12-12T09:27:42.244Z] =================================================================================================================== 00:14:08.221 [2024-12-12T09:27:42.244Z] Total : 90.42 271.27 0.00 0.00 15579.16 300.49 119052.30 00:14:08.221 [2024-12-12 09:27:42.225780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.221 [2024-12-12 09:27:42.225896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.221 { 00:14:08.221 "results": [ 00:14:08.221 { 00:14:08.221 "job": "raid_bdev1", 00:14:08.221 "core_mask": "0x1", 00:14:08.221 "workload": "randrw", 00:14:08.221 
"percentage": 50, 00:14:08.221 "status": "finished", 00:14:08.221 "queue_depth": 2, 00:14:08.221 "io_size": 3145728, 00:14:08.221 "runtime": 8.172817, 00:14:08.221 "iops": 90.42169915220174, 00:14:08.221 "mibps": 271.2650974566052, 00:14:08.221 "io_failed": 0, 00:14:08.221 "io_timeout": 0, 00:14:08.221 "avg_latency_us": 15579.157577512395, 00:14:08.221 "min_latency_us": 300.49257641921395, 00:14:08.221 "max_latency_us": 119052.29694323144 00:14:08.221 } 00:14:08.221 ], 00:14:08.221 "core_count": 1 00:14:08.221 } 00:14:08.221 [2024-12-12 09:27:42.226065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.222 [2024-12-12 09:27:42.226085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:08.222 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.222 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.222 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.222 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:08.222 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:08.481 /dev/nbd0 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:08.481 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:08.741 09:27:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.741 1+0 records in 00:14:08.741 1+0 records out 00:14:08.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273251 s, 15.0 MB/s 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:08.742 09:27:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:08.742 /dev/nbd1 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:08.742 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:14:09.002 1+0 records in 00:14:09.002 1+0 records out 00:14:09.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375061 s, 10.9 MB/s 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.002 09:27:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.262 09:27:43 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.262 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:14:09.522 /dev/nbd1 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:09.522 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.523 1+0 records in 00:14:09.523 1+0 records out 00:14:09.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378919 s, 10.8 MB/s 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.523 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.782 09:27:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.782 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79884 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79884 ']' 00:14:10.042 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 79884 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79884 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79884' 00:14:10.043 killing process with pid 79884 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79884 00:14:10.043 Received shutdown signal, test time was about 9.921182 seconds 00:14:10.043 00:14:10.043 Latency(us) 00:14:10.043 [2024-12-12T09:27:44.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.043 [2024-12-12T09:27:44.066Z] =================================================================================================================== 00:14:10.043 [2024-12-12T09:27:44.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.043 [2024-12-12 09:27:43.950013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.043 09:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79884 00:14:10.613 [2024-12-12 09:27:44.387038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.001 00:14:12.001 real 0m13.462s 00:14:12.001 user 0m16.805s 00:14:12.001 sys 0m1.970s 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.001 ************************************ 00:14:12.001 END TEST raid_rebuild_test_io 00:14:12.001 ************************************ 00:14:12.001 09:27:45 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:12.001 09:27:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.001 09:27:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.001 09:27:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 ************************************ 00:14:12.001 START TEST raid_rebuild_test_sb_io 00:14:12.001 ************************************ 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80294 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80294 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 80294 ']' 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.001 09:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.001 Zero copy mechanism will not be used. 00:14:12.001 [2024-12-12 09:27:45.801134] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:14:12.001 [2024-12-12 09:27:45.801260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80294 ] 00:14:12.001 [2024-12-12 09:27:45.975608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.260 [2024-12-12 09:27:46.107596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.520 [2024-12-12 09:27:46.317141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.520 [2024-12-12 09:27:46.317199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.780 BaseBdev1_malloc 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.780 [2024-12-12 09:27:46.658017] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.780 [2024-12-12 09:27:46.658081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.780 [2024-12-12 09:27:46.658105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.780 [2024-12-12 09:27:46.658117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.780 [2024-12-12 09:27:46.660508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.780 [2024-12-12 09:27:46.660548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.780 BaseBdev1 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.780 BaseBdev2_malloc 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.780 [2024-12-12 09:27:46.715116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.780 [2024-12-12 09:27:46.715175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:12.780 [2024-12-12 09:27:46.715196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.780 [2024-12-12 09:27:46.715207] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.780 [2024-12-12 09:27:46.717575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.780 [2024-12-12 09:27:46.717612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.780 BaseBdev2 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.780 BaseBdev3_malloc 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.780 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.780 [2024-12-12 09:27:46.802143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:12.781 [2024-12-12 09:27:46.802192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.781 [2024-12-12 09:27:46.802215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.781 
[2024-12-12 09:27:46.802227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.040 [2024-12-12 09:27:46.804600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.040 [2024-12-12 09:27:46.804637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:13.040 BaseBdev3 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.040 BaseBdev4_malloc 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.040 [2024-12-12 09:27:46.859025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:13.040 [2024-12-12 09:27:46.859079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.040 [2024-12-12 09:27:46.859100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:13.040 [2024-12-12 09:27:46.859111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.040 [2024-12-12 09:27:46.861437] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.040 [2024-12-12 09:27:46.861477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:13.040 BaseBdev4 00:14:13.040 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 spare_malloc 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 spare_delay 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 [2024-12-12 09:27:46.927880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.041 [2024-12-12 09:27:46.927928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.041 [2024-12-12 09:27:46.927944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:13.041 [2024-12-12 09:27:46.927966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.041 [2024-12-12 09:27:46.930309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.041 [2024-12-12 09:27:46.930345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.041 spare 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 [2024-12-12 09:27:46.939917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.041 [2024-12-12 09:27:46.942010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.041 [2024-12-12 09:27:46.942078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.041 [2024-12-12 09:27:46.942128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.041 [2024-12-12 09:27:46.942338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:13.041 [2024-12-12 09:27:46.942359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:13.041 [2024-12-12 09:27:46.942602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:13.041 [2024-12-12 09:27:46.942802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:13.041 [2024-12-12 09:27:46.942817] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:13.041 [2024-12-12 09:27:46.942977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.041 "name": "raid_bdev1", 00:14:13.041 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:13.041 "strip_size_kb": 0, 00:14:13.041 "state": "online", 00:14:13.041 "raid_level": "raid1", 00:14:13.041 "superblock": true, 00:14:13.041 "num_base_bdevs": 4, 00:14:13.041 "num_base_bdevs_discovered": 4, 00:14:13.041 "num_base_bdevs_operational": 4, 00:14:13.041 "base_bdevs_list": [ 00:14:13.041 { 00:14:13.041 "name": "BaseBdev1", 00:14:13.041 "uuid": "e8166c8a-db22-5bc7-ae81-912c46d7049b", 00:14:13.041 "is_configured": true, 00:14:13.041 "data_offset": 2048, 00:14:13.041 "data_size": 63488 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "name": "BaseBdev2", 00:14:13.041 "uuid": "f5bf7977-b895-58c9-a26f-f9a878af4d8c", 00:14:13.041 "is_configured": true, 00:14:13.041 "data_offset": 2048, 00:14:13.041 "data_size": 63488 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "name": "BaseBdev3", 00:14:13.041 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:13.041 "is_configured": true, 00:14:13.041 "data_offset": 2048, 00:14:13.041 "data_size": 63488 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "name": "BaseBdev4", 00:14:13.041 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:13.041 "is_configured": true, 00:14:13.041 "data_offset": 2048, 00:14:13.041 "data_size": 63488 00:14:13.041 } 00:14:13.041 ] 00:14:13.041 }' 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.041 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.610 [2024-12-12 09:27:47.447489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.610 [2024-12-12 09:27:47.531041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.610 09:27:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.610 "name": "raid_bdev1", 00:14:13.610 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:13.610 "strip_size_kb": 0, 00:14:13.610 "state": "online", 00:14:13.610 "raid_level": "raid1", 00:14:13.610 
"superblock": true, 00:14:13.610 "num_base_bdevs": 4, 00:14:13.610 "num_base_bdevs_discovered": 3, 00:14:13.610 "num_base_bdevs_operational": 3, 00:14:13.610 "base_bdevs_list": [ 00:14:13.610 { 00:14:13.610 "name": null, 00:14:13.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.610 "is_configured": false, 00:14:13.610 "data_offset": 0, 00:14:13.610 "data_size": 63488 00:14:13.610 }, 00:14:13.610 { 00:14:13.610 "name": "BaseBdev2", 00:14:13.610 "uuid": "f5bf7977-b895-58c9-a26f-f9a878af4d8c", 00:14:13.610 "is_configured": true, 00:14:13.610 "data_offset": 2048, 00:14:13.610 "data_size": 63488 00:14:13.610 }, 00:14:13.610 { 00:14:13.610 "name": "BaseBdev3", 00:14:13.610 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:13.610 "is_configured": true, 00:14:13.610 "data_offset": 2048, 00:14:13.610 "data_size": 63488 00:14:13.610 }, 00:14:13.610 { 00:14:13.610 "name": "BaseBdev4", 00:14:13.610 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:13.610 "is_configured": true, 00:14:13.610 "data_offset": 2048, 00:14:13.610 "data_size": 63488 00:14:13.610 } 00:14:13.610 ] 00:14:13.610 }' 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.610 09:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.610 [2024-12-12 09:27:47.632003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:13.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:13.870 Zero copy mechanism will not be used. 00:14:13.870 Running I/O for 60 seconds... 
00:14:14.130 09:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.130 09:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.130 09:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.130 [2024-12-12 09:27:48.019943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.130 09:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.130 09:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:14.130 [2024-12-12 09:27:48.088265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:14.130 [2024-12-12 09:27:48.090600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.390 [2024-12-12 09:27:48.208990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.390 [2024-12-12 09:27:48.211146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.650 [2024-12-12 09:27:48.436799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.650 [2024-12-12 09:27:48.437303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.650 142.00 IOPS, 426.00 MiB/s [2024-12-12T09:27:48.673Z] [2024-12-12 09:27:48.668707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:14.650 [2024-12-12 09:27:48.669251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:14.910 [2024-12-12 09:27:48.782030] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.170 "name": "raid_bdev1", 00:14:15.170 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:15.170 "strip_size_kb": 0, 00:14:15.170 "state": "online", 00:14:15.170 "raid_level": "raid1", 00:14:15.170 "superblock": true, 00:14:15.170 "num_base_bdevs": 4, 00:14:15.170 "num_base_bdevs_discovered": 4, 00:14:15.170 "num_base_bdevs_operational": 4, 00:14:15.170 "process": { 00:14:15.170 "type": "rebuild", 00:14:15.170 "target": "spare", 00:14:15.170 "progress": { 00:14:15.170 "blocks": 12288, 00:14:15.170 "percent": 19 00:14:15.170 } 00:14:15.170 }, 00:14:15.170 "base_bdevs_list": [ 00:14:15.170 { 00:14:15.170 "name": "spare", 
00:14:15.170 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:15.170 "is_configured": true, 00:14:15.170 "data_offset": 2048, 00:14:15.170 "data_size": 63488 00:14:15.170 }, 00:14:15.170 { 00:14:15.170 "name": "BaseBdev2", 00:14:15.170 "uuid": "f5bf7977-b895-58c9-a26f-f9a878af4d8c", 00:14:15.170 "is_configured": true, 00:14:15.170 "data_offset": 2048, 00:14:15.170 "data_size": 63488 00:14:15.170 }, 00:14:15.170 { 00:14:15.170 "name": "BaseBdev3", 00:14:15.170 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:15.170 "is_configured": true, 00:14:15.170 "data_offset": 2048, 00:14:15.170 "data_size": 63488 00:14:15.170 }, 00:14:15.170 { 00:14:15.170 "name": "BaseBdev4", 00:14:15.170 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:15.170 "is_configured": true, 00:14:15.170 "data_offset": 2048, 00:14:15.170 "data_size": 63488 00:14:15.170 } 00:14:15.170 ] 00:14:15.170 }' 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.170 [2024-12-12 09:27:49.147741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.170 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.430 [2024-12-12 09:27:49.227803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.430 [2024-12-12 
09:27:49.267911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:15.430 [2024-12-12 09:27:49.279777] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.430 [2024-12-12 09:27:49.294388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.430 [2024-12-12 09:27:49.294468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.430 [2024-12-12 09:27:49.294483] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.430 [2024-12-12 09:27:49.333303] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.430 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.430 "name": "raid_bdev1", 00:14:15.430 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:15.430 "strip_size_kb": 0, 00:14:15.430 "state": "online", 00:14:15.430 "raid_level": "raid1", 00:14:15.430 "superblock": true, 00:14:15.430 "num_base_bdevs": 4, 00:14:15.430 "num_base_bdevs_discovered": 3, 00:14:15.430 "num_base_bdevs_operational": 3, 00:14:15.430 "base_bdevs_list": [ 00:14:15.430 { 00:14:15.430 "name": null, 00:14:15.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.430 "is_configured": false, 00:14:15.430 "data_offset": 0, 00:14:15.430 "data_size": 63488 00:14:15.430 }, 00:14:15.430 { 00:14:15.431 "name": "BaseBdev2", 00:14:15.431 "uuid": "f5bf7977-b895-58c9-a26f-f9a878af4d8c", 00:14:15.431 "is_configured": true, 00:14:15.431 "data_offset": 2048, 00:14:15.431 "data_size": 63488 00:14:15.431 }, 00:14:15.431 { 00:14:15.431 "name": "BaseBdev3", 00:14:15.431 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:15.431 "is_configured": true, 00:14:15.431 "data_offset": 2048, 00:14:15.431 "data_size": 63488 00:14:15.431 }, 00:14:15.431 { 00:14:15.431 "name": "BaseBdev4", 00:14:15.431 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:15.431 "is_configured": true, 00:14:15.431 "data_offset": 2048, 00:14:15.431 "data_size": 63488 00:14:15.431 } 
00:14:15.431 ] 00:14:15.431 }' 00:14:15.431 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.431 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.950 130.50 IOPS, 391.50 MiB/s [2024-12-12T09:27:49.973Z] 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.950 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.950 "name": "raid_bdev1", 00:14:15.950 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:15.950 "strip_size_kb": 0, 00:14:15.950 "state": "online", 00:14:15.950 "raid_level": "raid1", 00:14:15.950 "superblock": true, 00:14:15.950 "num_base_bdevs": 4, 00:14:15.950 "num_base_bdevs_discovered": 3, 00:14:15.950 "num_base_bdevs_operational": 3, 00:14:15.950 "base_bdevs_list": [ 00:14:15.950 { 00:14:15.950 "name": null, 00:14:15.950 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:15.950 "is_configured": false, 00:14:15.950 "data_offset": 0, 00:14:15.950 "data_size": 63488 00:14:15.950 }, 00:14:15.950 { 00:14:15.950 "name": "BaseBdev2", 00:14:15.950 "uuid": "f5bf7977-b895-58c9-a26f-f9a878af4d8c", 00:14:15.950 "is_configured": true, 00:14:15.950 "data_offset": 2048, 00:14:15.950 "data_size": 63488 00:14:15.950 }, 00:14:15.950 { 00:14:15.950 "name": "BaseBdev3", 00:14:15.950 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:15.950 "is_configured": true, 00:14:15.950 "data_offset": 2048, 00:14:15.950 "data_size": 63488 00:14:15.950 }, 00:14:15.950 { 00:14:15.950 "name": "BaseBdev4", 00:14:15.950 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:15.950 "is_configured": true, 00:14:15.951 "data_offset": 2048, 00:14:15.951 "data_size": 63488 00:14:15.951 } 00:14:15.951 ] 00:14:15.951 }' 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.951 09:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.951 [2024-12-12 09:27:49.959020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.210 09:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.210 09:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:14:16.210 [2024-12-12 09:27:50.033254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:16.210 [2024-12-12 09:27:50.035553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.210 [2024-12-12 09:27:50.139277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:16.211 [2024-12-12 09:27:50.140079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:16.470 [2024-12-12 09:27:50.246942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.470 [2024-12-12 09:27:50.247182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.730 136.00 IOPS, 408.00 MiB/s [2024-12-12T09:27:50.753Z] [2024-12-12 09:27:50.728390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.990 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.990 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.990 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.990 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.990 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.250 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.250 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.251 "name": "raid_bdev1", 00:14:17.251 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:17.251 "strip_size_kb": 0, 00:14:17.251 "state": "online", 00:14:17.251 "raid_level": "raid1", 00:14:17.251 "superblock": true, 00:14:17.251 "num_base_bdevs": 4, 00:14:17.251 "num_base_bdevs_discovered": 4, 00:14:17.251 "num_base_bdevs_operational": 4, 00:14:17.251 "process": { 00:14:17.251 "type": "rebuild", 00:14:17.251 "target": "spare", 00:14:17.251 "progress": { 00:14:17.251 "blocks": 12288, 00:14:17.251 "percent": 19 00:14:17.251 } 00:14:17.251 }, 00:14:17.251 "base_bdevs_list": [ 00:14:17.251 { 00:14:17.251 "name": "spare", 00:14:17.251 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:17.251 "is_configured": true, 00:14:17.251 "data_offset": 2048, 00:14:17.251 "data_size": 63488 00:14:17.251 }, 00:14:17.251 { 00:14:17.251 "name": "BaseBdev2", 00:14:17.251 "uuid": "f5bf7977-b895-58c9-a26f-f9a878af4d8c", 00:14:17.251 "is_configured": true, 00:14:17.251 "data_offset": 2048, 00:14:17.251 "data_size": 63488 00:14:17.251 }, 00:14:17.251 { 00:14:17.251 "name": "BaseBdev3", 00:14:17.251 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:17.251 "is_configured": true, 00:14:17.251 "data_offset": 2048, 00:14:17.251 "data_size": 63488 00:14:17.251 }, 00:14:17.251 { 00:14:17.251 "name": "BaseBdev4", 00:14:17.251 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:17.251 "is_configured": true, 00:14:17.251 "data_offset": 2048, 00:14:17.251 "data_size": 63488 00:14:17.251 } 00:14:17.251 ] 00:14:17.251 }' 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:14:17.251 [2024-12-12 09:27:51.081321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:17.251 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.251 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.251 [2024-12-12 09:27:51.170810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.251 [2024-12-12 09:27:51.198885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:17.511 [2024-12-12 09:27:51.408095] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:17.511 [2024-12-12 09:27:51.408136] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.511 "name": "raid_bdev1", 00:14:17.511 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:17.511 "strip_size_kb": 0, 00:14:17.511 "state": "online", 00:14:17.511 "raid_level": "raid1", 00:14:17.511 "superblock": true, 00:14:17.511 "num_base_bdevs": 4, 00:14:17.511 "num_base_bdevs_discovered": 3, 00:14:17.511 
"num_base_bdevs_operational": 3, 00:14:17.511 "process": { 00:14:17.511 "type": "rebuild", 00:14:17.511 "target": "spare", 00:14:17.511 "progress": { 00:14:17.511 "blocks": 16384, 00:14:17.511 "percent": 25 00:14:17.511 } 00:14:17.511 }, 00:14:17.511 "base_bdevs_list": [ 00:14:17.511 { 00:14:17.511 "name": "spare", 00:14:17.511 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:17.511 "is_configured": true, 00:14:17.511 "data_offset": 2048, 00:14:17.511 "data_size": 63488 00:14:17.511 }, 00:14:17.511 { 00:14:17.511 "name": null, 00:14:17.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.511 "is_configured": false, 00:14:17.511 "data_offset": 0, 00:14:17.511 "data_size": 63488 00:14:17.511 }, 00:14:17.511 { 00:14:17.511 "name": "BaseBdev3", 00:14:17.511 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:17.511 "is_configured": true, 00:14:17.511 "data_offset": 2048, 00:14:17.511 "data_size": 63488 00:14:17.511 }, 00:14:17.511 { 00:14:17.511 "name": "BaseBdev4", 00:14:17.511 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:17.511 "is_configured": true, 00:14:17.511 "data_offset": 2048, 00:14:17.511 "data_size": 63488 00:14:17.511 } 00:14:17.511 ] 00:14:17.511 }' 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.511 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=497 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.771 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.771 "name": "raid_bdev1", 00:14:17.771 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:17.771 "strip_size_kb": 0, 00:14:17.771 "state": "online", 00:14:17.771 "raid_level": "raid1", 00:14:17.771 "superblock": true, 00:14:17.771 "num_base_bdevs": 4, 00:14:17.771 "num_base_bdevs_discovered": 3, 00:14:17.771 "num_base_bdevs_operational": 3, 00:14:17.771 "process": { 00:14:17.771 "type": "rebuild", 00:14:17.771 "target": "spare", 00:14:17.771 "progress": { 00:14:17.771 "blocks": 18432, 00:14:17.771 "percent": 29 00:14:17.771 } 00:14:17.771 }, 00:14:17.771 "base_bdevs_list": [ 00:14:17.771 { 00:14:17.771 "name": "spare", 00:14:17.772 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:17.772 "is_configured": true, 00:14:17.772 "data_offset": 2048, 00:14:17.772 "data_size": 63488 00:14:17.772 }, 00:14:17.772 { 00:14:17.772 "name": null, 
00:14:17.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.772 "is_configured": false, 00:14:17.772 "data_offset": 0, 00:14:17.772 "data_size": 63488 00:14:17.772 }, 00:14:17.772 { 00:14:17.772 "name": "BaseBdev3", 00:14:17.772 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:17.772 "is_configured": true, 00:14:17.772 "data_offset": 2048, 00:14:17.772 "data_size": 63488 00:14:17.772 }, 00:14:17.772 { 00:14:17.772 "name": "BaseBdev4", 00:14:17.772 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:17.772 "is_configured": true, 00:14:17.772 "data_offset": 2048, 00:14:17.772 "data_size": 63488 00:14:17.772 } 00:14:17.772 ] 00:14:17.772 }' 00:14:17.772 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.772 129.00 IOPS, 387.00 MiB/s [2024-12-12T09:27:51.795Z] 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.772 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.772 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.772 09:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.772 [2024-12-12 09:27:51.746835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:17.772 [2024-12-12 09:27:51.747344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:18.032 [2024-12-12 09:27:51.985612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:18.292 [2024-12-12 09:27:52.095872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:18.553 [2024-12-12 09:27:52.487320] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:18.813 115.80 IOPS, 347.40 MiB/s [2024-12-12T09:27:52.836Z] [2024-12-12 09:27:52.713150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.813 [2024-12-12 09:27:52.714563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.813 "name": "raid_bdev1", 00:14:18.813 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:18.813 "strip_size_kb": 0, 00:14:18.813 "state": 
"online", 00:14:18.813 "raid_level": "raid1", 00:14:18.813 "superblock": true, 00:14:18.813 "num_base_bdevs": 4, 00:14:18.813 "num_base_bdevs_discovered": 3, 00:14:18.813 "num_base_bdevs_operational": 3, 00:14:18.813 "process": { 00:14:18.813 "type": "rebuild", 00:14:18.813 "target": "spare", 00:14:18.813 "progress": { 00:14:18.813 "blocks": 38912, 00:14:18.813 "percent": 61 00:14:18.813 } 00:14:18.813 }, 00:14:18.813 "base_bdevs_list": [ 00:14:18.813 { 00:14:18.813 "name": "spare", 00:14:18.813 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:18.813 "is_configured": true, 00:14:18.813 "data_offset": 2048, 00:14:18.813 "data_size": 63488 00:14:18.813 }, 00:14:18.813 { 00:14:18.813 "name": null, 00:14:18.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.813 "is_configured": false, 00:14:18.813 "data_offset": 0, 00:14:18.813 "data_size": 63488 00:14:18.813 }, 00:14:18.813 { 00:14:18.813 "name": "BaseBdev3", 00:14:18.813 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:18.813 "is_configured": true, 00:14:18.813 "data_offset": 2048, 00:14:18.813 "data_size": 63488 00:14:18.813 }, 00:14:18.813 { 00:14:18.813 "name": "BaseBdev4", 00:14:18.813 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:18.813 "is_configured": true, 00:14:18.813 "data_offset": 2048, 00:14:18.813 "data_size": 63488 00:14:18.813 } 00:14:18.813 ] 00:14:18.813 }' 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.813 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.073 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.073 09:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.073 [2024-12-12 09:27:52.934084] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:19.333 [2024-12-12 09:27:53.269615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:19.333 [2024-12-12 09:27:53.271282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:19.592 [2024-12-12 09:27:53.473749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:19.850 103.50 IOPS, 310.50 MiB/s [2024-12-12T09:27:53.873Z] [2024-12-12 09:27:53.794251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.850 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.851 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.851 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.110 09:27:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.110 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.110 "name": "raid_bdev1", 00:14:20.110 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:20.110 "strip_size_kb": 0, 00:14:20.110 "state": "online", 00:14:20.110 "raid_level": "raid1", 00:14:20.110 "superblock": true, 00:14:20.110 "num_base_bdevs": 4, 00:14:20.110 "num_base_bdevs_discovered": 3, 00:14:20.110 "num_base_bdevs_operational": 3, 00:14:20.110 "process": { 00:14:20.110 "type": "rebuild", 00:14:20.110 "target": "spare", 00:14:20.110 "progress": { 00:14:20.110 "blocks": 53248, 00:14:20.110 "percent": 83 00:14:20.110 } 00:14:20.110 }, 00:14:20.110 "base_bdevs_list": [ 00:14:20.110 { 00:14:20.110 "name": "spare", 00:14:20.110 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:20.110 "is_configured": true, 00:14:20.110 "data_offset": 2048, 00:14:20.110 "data_size": 63488 00:14:20.110 }, 00:14:20.110 { 00:14:20.110 "name": null, 00:14:20.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.110 "is_configured": false, 00:14:20.110 "data_offset": 0, 00:14:20.110 "data_size": 63488 00:14:20.110 }, 00:14:20.110 { 00:14:20.110 "name": "BaseBdev3", 00:14:20.110 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:20.110 "is_configured": true, 00:14:20.110 "data_offset": 2048, 00:14:20.110 "data_size": 63488 00:14:20.110 }, 00:14:20.110 { 00:14:20.110 "name": "BaseBdev4", 00:14:20.110 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:20.110 "is_configured": true, 00:14:20.110 "data_offset": 2048, 00:14:20.110 "data_size": 63488 00:14:20.110 } 00:14:20.110 ] 00:14:20.110 }' 00:14:20.110 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.110 09:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.111 09:27:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.111 09:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.111 09:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.111 [2024-12-12 09:27:54.120363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:20.370 [2024-12-12 09:27:54.338875] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:20.630 [2024-12-12 09:27:54.438704] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:20.630 [2024-12-12 09:27:54.448438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.200 92.86 IOPS, 278.57 MiB/s [2024-12-12T09:27:55.223Z] 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.200 09:27:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.200 "name": "raid_bdev1", 00:14:21.200 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:21.200 "strip_size_kb": 0, 00:14:21.200 "state": "online", 00:14:21.200 "raid_level": "raid1", 00:14:21.200 "superblock": true, 00:14:21.200 "num_base_bdevs": 4, 00:14:21.200 "num_base_bdevs_discovered": 3, 00:14:21.200 "num_base_bdevs_operational": 3, 00:14:21.200 "base_bdevs_list": [ 00:14:21.200 { 00:14:21.200 "name": "spare", 00:14:21.200 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:21.200 "is_configured": true, 00:14:21.200 "data_offset": 2048, 00:14:21.200 "data_size": 63488 00:14:21.200 }, 00:14:21.200 { 00:14:21.200 "name": null, 00:14:21.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.200 "is_configured": false, 00:14:21.200 "data_offset": 0, 00:14:21.200 "data_size": 63488 00:14:21.200 }, 00:14:21.200 { 00:14:21.200 "name": "BaseBdev3", 00:14:21.200 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:21.200 "is_configured": true, 00:14:21.200 "data_offset": 2048, 00:14:21.200 "data_size": 63488 00:14:21.200 }, 00:14:21.200 { 00:14:21.200 "name": "BaseBdev4", 00:14:21.200 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:21.200 "is_configured": true, 00:14:21.200 "data_offset": 2048, 00:14:21.200 "data_size": 63488 00:14:21.200 } 00:14:21.200 ] 00:14:21.200 }' 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.200 09:27:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.200 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.200 "name": "raid_bdev1", 00:14:21.200 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:21.200 "strip_size_kb": 0, 00:14:21.200 "state": "online", 00:14:21.200 "raid_level": "raid1", 00:14:21.200 "superblock": true, 00:14:21.200 "num_base_bdevs": 4, 00:14:21.200 "num_base_bdevs_discovered": 3, 00:14:21.200 "num_base_bdevs_operational": 3, 00:14:21.200 "base_bdevs_list": [ 00:14:21.200 { 00:14:21.200 "name": "spare", 00:14:21.200 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:21.200 "is_configured": true, 00:14:21.200 "data_offset": 2048, 00:14:21.200 
"data_size": 63488 00:14:21.200 }, 00:14:21.200 { 00:14:21.200 "name": null, 00:14:21.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.200 "is_configured": false, 00:14:21.200 "data_offset": 0, 00:14:21.200 "data_size": 63488 00:14:21.200 }, 00:14:21.200 { 00:14:21.200 "name": "BaseBdev3", 00:14:21.200 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:21.200 "is_configured": true, 00:14:21.200 "data_offset": 2048, 00:14:21.200 "data_size": 63488 00:14:21.200 }, 00:14:21.200 { 00:14:21.200 "name": "BaseBdev4", 00:14:21.200 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:21.200 "is_configured": true, 00:14:21.200 "data_offset": 2048, 00:14:21.200 "data_size": 63488 00:14:21.200 } 00:14:21.200 ] 00:14:21.200 }' 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.459 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.459 "name": "raid_bdev1", 00:14:21.459 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:21.459 "strip_size_kb": 0, 00:14:21.459 "state": "online", 00:14:21.459 "raid_level": "raid1", 00:14:21.459 "superblock": true, 00:14:21.459 "num_base_bdevs": 4, 00:14:21.459 "num_base_bdevs_discovered": 3, 00:14:21.459 "num_base_bdevs_operational": 3, 00:14:21.459 "base_bdevs_list": [ 00:14:21.459 { 00:14:21.459 "name": "spare", 00:14:21.459 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:21.459 "is_configured": true, 00:14:21.459 "data_offset": 2048, 00:14:21.459 "data_size": 63488 00:14:21.459 }, 00:14:21.459 { 00:14:21.459 "name": null, 00:14:21.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.459 "is_configured": false, 00:14:21.459 "data_offset": 0, 00:14:21.459 "data_size": 63488 00:14:21.459 }, 00:14:21.459 { 00:14:21.459 "name": "BaseBdev3", 00:14:21.459 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:21.459 "is_configured": true, 00:14:21.459 
"data_offset": 2048, 00:14:21.459 "data_size": 63488 00:14:21.459 }, 00:14:21.460 { 00:14:21.460 "name": "BaseBdev4", 00:14:21.460 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:21.460 "is_configured": true, 00:14:21.460 "data_offset": 2048, 00:14:21.460 "data_size": 63488 00:14:21.460 } 00:14:21.460 ] 00:14:21.460 }' 00:14:21.460 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.460 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.720 86.38 IOPS, 259.12 MiB/s [2024-12-12T09:27:55.743Z] 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.720 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.720 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.720 [2024-12-12 09:27:55.734415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.720 [2024-12-12 09:27:55.734496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.980 00:14:21.980 Latency(us) 00:14:21.980 [2024-12-12T09:27:56.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.980 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:21.980 raid_bdev1 : 8.14 85.59 256.76 0.00 0.00 15317.96 318.38 118136.51 00:14:21.980 [2024-12-12T09:27:56.003Z] =================================================================================================================== 00:14:21.980 [2024-12-12T09:27:56.003Z] Total : 85.59 256.76 0.00 0.00 15317.96 318.38 118136.51 00:14:21.980 [2024-12-12 09:27:55.782376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.980 [2024-12-12 09:27:55.782495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.980 [2024-12-12 
09:27:55.782631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.980 [2024-12-12 09:27:55.782685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.980 { 00:14:21.980 "results": [ 00:14:21.980 { 00:14:21.980 "job": "raid_bdev1", 00:14:21.980 "core_mask": "0x1", 00:14:21.980 "workload": "randrw", 00:14:21.980 "percentage": 50, 00:14:21.980 "status": "finished", 00:14:21.980 "queue_depth": 2, 00:14:21.980 "io_size": 3145728, 00:14:21.980 "runtime": 8.143668, 00:14:21.980 "iops": 85.5879684682627, 00:14:21.980 "mibps": 256.7639054047881, 00:14:21.980 "io_failed": 0, 00:14:21.980 "io_timeout": 0, 00:14:21.980 "avg_latency_us": 15317.96424602006, 00:14:21.980 "min_latency_us": 318.37903930131006, 00:14:21.980 "max_latency_us": 118136.51004366812 00:14:21.980 } 00:14:21.980 ], 00:14:21.980 "core_count": 1 00:14:21.980 } 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.980 09:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:22.240 /dev/nbd0 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:22.241 09:27:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.241 1+0 records in 00:14:22.241 1+0 records out 00:14:22.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422501 s, 9.7 MB/s 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 
/dev/nbd1 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.241 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:22.501 /dev/nbd1 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.501 1+0 records in 00:14:22.501 1+0 records out 00:14:22.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476126 s, 8.6 MB/s 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.501 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.762 09:27:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.762 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:23.022 /dev/nbd1 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.022 1+0 records in 00:14:23.022 1+0 records out 00:14:23.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364098 s, 11.2 MB/s 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.022 09:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.287 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.556 
09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.556 [2024-12-12 09:27:57.531104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:23.556 [2024-12-12 09:27:57.531233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.556 [2024-12-12 09:27:57.531276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:23.556 [2024-12-12 09:27:57.531311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.556 [2024-12-12 09:27:57.533778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.556 [2024-12-12 09:27:57.533867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.556 [2024-12-12 09:27:57.534018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:23.556 [2024-12-12 09:27:57.534098] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.556 [2024-12-12 09:27:57.534307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.556 [2024-12-12 09:27:57.534453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.556 spare 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.556 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.827 [2024-12-12 09:27:57.634392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:23.827 [2024-12-12 09:27:57.634452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:23.827 [2024-12-12 09:27:57.634786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:23.827 [2024-12-12 09:27:57.635029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:23.827 [2024-12-12 09:27:57.635078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:23.827 [2024-12-12 09:27:57.635290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.827 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.827 "name": "raid_bdev1", 00:14:23.827 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:23.827 "strip_size_kb": 0, 00:14:23.827 "state": "online", 00:14:23.827 "raid_level": "raid1", 00:14:23.827 "superblock": true, 00:14:23.827 "num_base_bdevs": 4, 00:14:23.827 "num_base_bdevs_discovered": 3, 00:14:23.827 "num_base_bdevs_operational": 3, 00:14:23.827 "base_bdevs_list": [ 00:14:23.827 { 00:14:23.827 "name": "spare", 00:14:23.827 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:23.827 "is_configured": true, 
00:14:23.827 "data_offset": 2048, 00:14:23.827 "data_size": 63488 00:14:23.827 }, 00:14:23.827 { 00:14:23.827 "name": null, 00:14:23.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.827 "is_configured": false, 00:14:23.827 "data_offset": 2048, 00:14:23.827 "data_size": 63488 00:14:23.827 }, 00:14:23.827 { 00:14:23.827 "name": "BaseBdev3", 00:14:23.827 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:23.827 "is_configured": true, 00:14:23.827 "data_offset": 2048, 00:14:23.827 "data_size": 63488 00:14:23.827 }, 00:14:23.827 { 00:14:23.828 "name": "BaseBdev4", 00:14:23.828 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:23.828 "is_configured": true, 00:14:23.828 "data_offset": 2048, 00:14:23.828 "data_size": 63488 00:14:23.828 } 00:14:23.828 ] 00:14:23.828 }' 00:14:23.828 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.828 09:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.095 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.355 "name": "raid_bdev1", 00:14:24.355 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:24.355 "strip_size_kb": 0, 00:14:24.355 "state": "online", 00:14:24.355 "raid_level": "raid1", 00:14:24.355 "superblock": true, 00:14:24.355 "num_base_bdevs": 4, 00:14:24.355 "num_base_bdevs_discovered": 3, 00:14:24.355 "num_base_bdevs_operational": 3, 00:14:24.355 "base_bdevs_list": [ 00:14:24.355 { 00:14:24.355 "name": "spare", 00:14:24.355 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:24.355 "is_configured": true, 00:14:24.355 "data_offset": 2048, 00:14:24.355 "data_size": 63488 00:14:24.355 }, 00:14:24.355 { 00:14:24.355 "name": null, 00:14:24.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.355 "is_configured": false, 00:14:24.355 "data_offset": 2048, 00:14:24.355 "data_size": 63488 00:14:24.355 }, 00:14:24.355 { 00:14:24.355 "name": "BaseBdev3", 00:14:24.355 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:24.355 "is_configured": true, 00:14:24.355 "data_offset": 2048, 00:14:24.355 "data_size": 63488 00:14:24.355 }, 00:14:24.355 { 00:14:24.355 "name": "BaseBdev4", 00:14:24.355 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:24.355 "is_configured": true, 00:14:24.355 "data_offset": 2048, 00:14:24.355 "data_size": 63488 00:14:24.355 } 00:14:24.355 ] 00:14:24.355 }' 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.355 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.355 [2024-12-12 09:27:58.278226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.356 09:27:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.356 "name": "raid_bdev1", 00:14:24.356 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:24.356 "strip_size_kb": 0, 00:14:24.356 "state": "online", 00:14:24.356 "raid_level": "raid1", 00:14:24.356 "superblock": true, 00:14:24.356 "num_base_bdevs": 4, 00:14:24.356 "num_base_bdevs_discovered": 2, 00:14:24.356 "num_base_bdevs_operational": 2, 00:14:24.356 "base_bdevs_list": [ 00:14:24.356 { 00:14:24.356 "name": null, 00:14:24.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.356 "is_configured": false, 00:14:24.356 "data_offset": 0, 00:14:24.356 "data_size": 63488 00:14:24.356 }, 00:14:24.356 { 00:14:24.356 "name": null, 00:14:24.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.356 "is_configured": false, 00:14:24.356 "data_offset": 2048, 00:14:24.356 "data_size": 63488 00:14:24.356 }, 00:14:24.356 { 00:14:24.356 "name": "BaseBdev3", 00:14:24.356 "uuid": 
"1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:24.356 "is_configured": true, 00:14:24.356 "data_offset": 2048, 00:14:24.356 "data_size": 63488 00:14:24.356 }, 00:14:24.356 { 00:14:24.356 "name": "BaseBdev4", 00:14:24.356 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:24.356 "is_configured": true, 00:14:24.356 "data_offset": 2048, 00:14:24.356 "data_size": 63488 00:14:24.356 } 00:14:24.356 ] 00:14:24.356 }' 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.356 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.926 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.926 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.926 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.926 [2024-12-12 09:27:58.737564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.926 [2024-12-12 09:27:58.737698] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:24.926 [2024-12-12 09:27:58.737712] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:24.926 [2024-12-12 09:27:58.737749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.926 [2024-12-12 09:27:58.752494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:24.926 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.926 09:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:24.926 [2024-12-12 09:27:58.754645] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.867 "name": "raid_bdev1", 00:14:25.867 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:25.867 "strip_size_kb": 0, 00:14:25.867 "state": "online", 
00:14:25.867 "raid_level": "raid1", 00:14:25.867 "superblock": true, 00:14:25.867 "num_base_bdevs": 4, 00:14:25.867 "num_base_bdevs_discovered": 3, 00:14:25.867 "num_base_bdevs_operational": 3, 00:14:25.867 "process": { 00:14:25.867 "type": "rebuild", 00:14:25.867 "target": "spare", 00:14:25.867 "progress": { 00:14:25.867 "blocks": 20480, 00:14:25.867 "percent": 32 00:14:25.867 } 00:14:25.867 }, 00:14:25.867 "base_bdevs_list": [ 00:14:25.867 { 00:14:25.867 "name": "spare", 00:14:25.867 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:25.867 "is_configured": true, 00:14:25.867 "data_offset": 2048, 00:14:25.867 "data_size": 63488 00:14:25.867 }, 00:14:25.867 { 00:14:25.867 "name": null, 00:14:25.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.867 "is_configured": false, 00:14:25.867 "data_offset": 2048, 00:14:25.867 "data_size": 63488 00:14:25.867 }, 00:14:25.867 { 00:14:25.867 "name": "BaseBdev3", 00:14:25.867 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:25.867 "is_configured": true, 00:14:25.867 "data_offset": 2048, 00:14:25.867 "data_size": 63488 00:14:25.867 }, 00:14:25.867 { 00:14:25.867 "name": "BaseBdev4", 00:14:25.867 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:25.867 "is_configured": true, 00:14:25.867 "data_offset": 2048, 00:14:25.867 "data_size": 63488 00:14:25.867 } 00:14:25.867 ] 00:14:25.867 }' 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.867 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.127 09:27:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.127 [2024-12-12 09:27:59.913674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.127 [2024-12-12 09:27:59.963199] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.127 [2024-12-12 09:27:59.963325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.127 [2024-12-12 09:27:59.963347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.127 [2024-12-12 09:27:59.963355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.127 09:27:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.127 09:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.127 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.127 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.127 "name": "raid_bdev1", 00:14:26.127 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:26.127 "strip_size_kb": 0, 00:14:26.127 "state": "online", 00:14:26.127 "raid_level": "raid1", 00:14:26.127 "superblock": true, 00:14:26.127 "num_base_bdevs": 4, 00:14:26.127 "num_base_bdevs_discovered": 2, 00:14:26.127 "num_base_bdevs_operational": 2, 00:14:26.127 "base_bdevs_list": [ 00:14:26.127 { 00:14:26.127 "name": null, 00:14:26.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.127 "is_configured": false, 00:14:26.127 "data_offset": 0, 00:14:26.127 "data_size": 63488 00:14:26.127 }, 00:14:26.127 { 00:14:26.127 "name": null, 00:14:26.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.127 "is_configured": false, 00:14:26.127 "data_offset": 2048, 00:14:26.127 "data_size": 63488 00:14:26.127 }, 00:14:26.127 { 00:14:26.127 "name": "BaseBdev3", 00:14:26.127 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:26.127 "is_configured": true, 00:14:26.127 "data_offset": 2048, 00:14:26.127 "data_size": 63488 00:14:26.127 }, 00:14:26.127 { 00:14:26.127 "name": "BaseBdev4", 00:14:26.127 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:26.127 "is_configured": true, 00:14:26.127 "data_offset": 2048, 00:14:26.127 
"data_size": 63488 00:14:26.127 } 00:14:26.127 ] 00:14:26.127 }' 00:14:26.127 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.127 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.697 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.697 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.697 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.697 [2024-12-12 09:28:00.444396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.697 [2024-12-12 09:28:00.444494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.697 [2024-12-12 09:28:00.444565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:26.697 [2024-12-12 09:28:00.444597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.697 [2024-12-12 09:28:00.445141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.697 [2024-12-12 09:28:00.445197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.697 [2024-12-12 09:28:00.445310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:26.697 [2024-12-12 09:28:00.445349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:26.697 [2024-12-12 09:28:00.445391] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:26.697 [2024-12-12 09:28:00.445443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.697 [2024-12-12 09:28:00.459629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:26.697 spare 00:14:26.697 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.697 09:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:26.697 [2024-12-12 09:28:00.461755] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.637 "name": "raid_bdev1", 00:14:27.637 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:27.637 "strip_size_kb": 0, 00:14:27.637 
"state": "online", 00:14:27.637 "raid_level": "raid1", 00:14:27.637 "superblock": true, 00:14:27.637 "num_base_bdevs": 4, 00:14:27.637 "num_base_bdevs_discovered": 3, 00:14:27.637 "num_base_bdevs_operational": 3, 00:14:27.637 "process": { 00:14:27.637 "type": "rebuild", 00:14:27.637 "target": "spare", 00:14:27.637 "progress": { 00:14:27.637 "blocks": 20480, 00:14:27.637 "percent": 32 00:14:27.637 } 00:14:27.637 }, 00:14:27.637 "base_bdevs_list": [ 00:14:27.637 { 00:14:27.637 "name": "spare", 00:14:27.637 "uuid": "3527be42-8464-57de-8c97-e69f7712cd33", 00:14:27.637 "is_configured": true, 00:14:27.637 "data_offset": 2048, 00:14:27.637 "data_size": 63488 00:14:27.637 }, 00:14:27.637 { 00:14:27.637 "name": null, 00:14:27.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.637 "is_configured": false, 00:14:27.637 "data_offset": 2048, 00:14:27.637 "data_size": 63488 00:14:27.637 }, 00:14:27.637 { 00:14:27.637 "name": "BaseBdev3", 00:14:27.637 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:27.637 "is_configured": true, 00:14:27.637 "data_offset": 2048, 00:14:27.637 "data_size": 63488 00:14:27.637 }, 00:14:27.637 { 00:14:27.637 "name": "BaseBdev4", 00:14:27.637 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:27.637 "is_configured": true, 00:14:27.637 "data_offset": 2048, 00:14:27.637 "data_size": 63488 00:14:27.637 } 00:14:27.637 ] 00:14:27.637 }' 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:27.637 09:28:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.637 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.637 [2024-12-12 09:28:01.622203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.897 [2024-12-12 09:28:01.670050] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.897 [2024-12-12 09:28:01.670109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.897 [2024-12-12 09:28:01.670124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.897 [2024-12-12 09:28:01.670134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.897 09:28:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.897 "name": "raid_bdev1", 00:14:27.897 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:27.897 "strip_size_kb": 0, 00:14:27.897 "state": "online", 00:14:27.897 "raid_level": "raid1", 00:14:27.897 "superblock": true, 00:14:27.897 "num_base_bdevs": 4, 00:14:27.897 "num_base_bdevs_discovered": 2, 00:14:27.897 "num_base_bdevs_operational": 2, 00:14:27.897 "base_bdevs_list": [ 00:14:27.897 { 00:14:27.897 "name": null, 00:14:27.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.897 "is_configured": false, 00:14:27.897 "data_offset": 0, 00:14:27.897 "data_size": 63488 00:14:27.897 }, 00:14:27.897 { 00:14:27.897 "name": null, 00:14:27.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.897 "is_configured": false, 00:14:27.897 "data_offset": 2048, 00:14:27.897 "data_size": 63488 00:14:27.897 }, 00:14:27.897 { 00:14:27.897 "name": "BaseBdev3", 00:14:27.897 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:27.897 "is_configured": true, 00:14:27.897 "data_offset": 2048, 00:14:27.897 "data_size": 63488 00:14:27.897 }, 00:14:27.897 { 00:14:27.897 "name": "BaseBdev4", 00:14:27.897 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:27.897 "is_configured": true, 00:14:27.897 "data_offset": 2048, 00:14:27.897 
"data_size": 63488 00:14:27.897 } 00:14:27.897 ] 00:14:27.897 }' 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.897 09:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.158 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.418 "name": "raid_bdev1", 00:14:28.418 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:28.418 "strip_size_kb": 0, 00:14:28.418 "state": "online", 00:14:28.418 "raid_level": "raid1", 00:14:28.418 "superblock": true, 00:14:28.418 "num_base_bdevs": 4, 00:14:28.418 "num_base_bdevs_discovered": 2, 00:14:28.418 "num_base_bdevs_operational": 2, 00:14:28.418 "base_bdevs_list": [ 00:14:28.418 { 00:14:28.418 "name": null, 00:14:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:28.418 "is_configured": false, 00:14:28.418 "data_offset": 0, 00:14:28.418 "data_size": 63488 00:14:28.418 }, 00:14:28.418 { 00:14:28.418 "name": null, 00:14:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.418 "is_configured": false, 00:14:28.418 "data_offset": 2048, 00:14:28.418 "data_size": 63488 00:14:28.418 }, 00:14:28.418 { 00:14:28.418 "name": "BaseBdev3", 00:14:28.418 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:28.418 "is_configured": true, 00:14:28.418 "data_offset": 2048, 00:14:28.418 "data_size": 63488 00:14:28.418 }, 00:14:28.418 { 00:14:28.418 "name": "BaseBdev4", 00:14:28.418 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:28.418 "is_configured": true, 00:14:28.418 "data_offset": 2048, 00:14:28.418 "data_size": 63488 00:14:28.418 } 00:14:28.418 ] 00:14:28.418 }' 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.418 09:28:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.418 [2024-12-12 09:28:02.333771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:28.418 [2024-12-12 09:28:02.333827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.418 [2024-12-12 09:28:02.333848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:28.418 [2024-12-12 09:28:02.333860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.418 [2024-12-12 09:28:02.334366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.418 [2024-12-12 09:28:02.334395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.418 [2024-12-12 09:28:02.334473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:28.418 [2024-12-12 09:28:02.334495] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:28.418 [2024-12-12 09:28:02.334502] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:28.418 [2024-12-12 09:28:02.334518] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:28.418 BaseBdev1 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.418 09:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.358 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.618 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.618 "name": "raid_bdev1", 00:14:29.618 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:29.618 "strip_size_kb": 0, 00:14:29.618 "state": "online", 00:14:29.618 "raid_level": "raid1", 00:14:29.618 "superblock": true, 00:14:29.618 "num_base_bdevs": 4, 00:14:29.618 "num_base_bdevs_discovered": 2, 00:14:29.618 "num_base_bdevs_operational": 2, 00:14:29.618 "base_bdevs_list": [ 00:14:29.618 { 00:14:29.618 "name": null, 00:14:29.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.618 "is_configured": false, 00:14:29.618 
"data_offset": 0, 00:14:29.618 "data_size": 63488 00:14:29.618 }, 00:14:29.618 { 00:14:29.618 "name": null, 00:14:29.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.618 "is_configured": false, 00:14:29.618 "data_offset": 2048, 00:14:29.618 "data_size": 63488 00:14:29.618 }, 00:14:29.618 { 00:14:29.618 "name": "BaseBdev3", 00:14:29.618 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:29.618 "is_configured": true, 00:14:29.618 "data_offset": 2048, 00:14:29.618 "data_size": 63488 00:14:29.618 }, 00:14:29.618 { 00:14:29.618 "name": "BaseBdev4", 00:14:29.618 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:29.618 "is_configured": true, 00:14:29.618 "data_offset": 2048, 00:14:29.618 "data_size": 63488 00:14:29.618 } 00:14:29.618 ] 00:14:29.618 }' 00:14:29.618 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.618 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.878 "name": "raid_bdev1", 00:14:29.878 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:29.878 "strip_size_kb": 0, 00:14:29.878 "state": "online", 00:14:29.878 "raid_level": "raid1", 00:14:29.878 "superblock": true, 00:14:29.878 "num_base_bdevs": 4, 00:14:29.878 "num_base_bdevs_discovered": 2, 00:14:29.878 "num_base_bdevs_operational": 2, 00:14:29.878 "base_bdevs_list": [ 00:14:29.878 { 00:14:29.878 "name": null, 00:14:29.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.878 "is_configured": false, 00:14:29.878 "data_offset": 0, 00:14:29.878 "data_size": 63488 00:14:29.878 }, 00:14:29.878 { 00:14:29.878 "name": null, 00:14:29.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.878 "is_configured": false, 00:14:29.878 "data_offset": 2048, 00:14:29.878 "data_size": 63488 00:14:29.878 }, 00:14:29.878 { 00:14:29.878 "name": "BaseBdev3", 00:14:29.878 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:29.878 "is_configured": true, 00:14:29.878 "data_offset": 2048, 00:14:29.878 "data_size": 63488 00:14:29.878 }, 00:14:29.878 { 00:14:29.878 "name": "BaseBdev4", 00:14:29.878 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:29.878 "is_configured": true, 00:14:29.878 "data_offset": 2048, 00:14:29.878 "data_size": 63488 00:14:29.878 } 00:14:29.878 ] 00:14:29.878 }' 00:14:29.878 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.138 
09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.138 [2024-12-12 09:28:03.987269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.138 [2024-12-12 09:28:03.987402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:30.138 [2024-12-12 09:28:03.987414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.138 request: 00:14:30.138 { 00:14:30.138 "base_bdev": "BaseBdev1", 00:14:30.138 "raid_bdev": "raid_bdev1", 00:14:30.138 "method": "bdev_raid_add_base_bdev", 00:14:30.138 "req_id": 1 00:14:30.138 } 00:14:30.138 Got JSON-RPC error response 00:14:30.138 response: 00:14:30.138 { 00:14:30.138 "code": -22, 00:14:30.138 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:30.138 } 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.138 09:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.078 09:28:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.078 "name": "raid_bdev1", 00:14:31.078 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:31.078 "strip_size_kb": 0, 00:14:31.078 "state": "online", 00:14:31.078 "raid_level": "raid1", 00:14:31.078 "superblock": true, 00:14:31.078 "num_base_bdevs": 4, 00:14:31.078 "num_base_bdevs_discovered": 2, 00:14:31.078 "num_base_bdevs_operational": 2, 00:14:31.078 "base_bdevs_list": [ 00:14:31.078 { 00:14:31.078 "name": null, 00:14:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.078 "is_configured": false, 00:14:31.078 "data_offset": 0, 00:14:31.078 "data_size": 63488 00:14:31.078 }, 00:14:31.078 { 00:14:31.078 "name": null, 00:14:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.078 "is_configured": false, 00:14:31.078 "data_offset": 2048, 00:14:31.078 "data_size": 63488 00:14:31.078 }, 00:14:31.078 { 00:14:31.078 "name": "BaseBdev3", 00:14:31.078 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:31.078 "is_configured": true, 00:14:31.078 "data_offset": 2048, 00:14:31.078 "data_size": 63488 00:14:31.078 }, 00:14:31.078 { 00:14:31.078 "name": "BaseBdev4", 00:14:31.078 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:31.078 "is_configured": true, 00:14:31.078 "data_offset": 2048, 00:14:31.078 "data_size": 63488 00:14:31.078 } 00:14:31.078 ] 00:14:31.078 }' 00:14:31.078 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.078 09:28:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.648 "name": "raid_bdev1", 00:14:31.648 "uuid": "5d615d31-43f5-4d45-89ef-bf21769e7456", 00:14:31.648 "strip_size_kb": 0, 00:14:31.648 "state": "online", 00:14:31.648 "raid_level": "raid1", 00:14:31.648 "superblock": true, 00:14:31.648 "num_base_bdevs": 4, 00:14:31.648 "num_base_bdevs_discovered": 2, 00:14:31.648 "num_base_bdevs_operational": 2, 00:14:31.648 "base_bdevs_list": [ 00:14:31.648 { 00:14:31.648 "name": null, 00:14:31.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.648 "is_configured": false, 00:14:31.648 "data_offset": 0, 00:14:31.648 "data_size": 63488 00:14:31.648 }, 00:14:31.648 { 00:14:31.648 "name": null, 00:14:31.648 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:31.648 "is_configured": false, 00:14:31.648 "data_offset": 2048, 00:14:31.648 "data_size": 63488 00:14:31.648 }, 00:14:31.648 { 00:14:31.648 "name": "BaseBdev3", 00:14:31.648 "uuid": "1fe2e75d-24c7-536c-b578-22b269c2eaff", 00:14:31.648 "is_configured": true, 00:14:31.648 "data_offset": 2048, 00:14:31.648 "data_size": 63488 00:14:31.648 }, 00:14:31.648 { 00:14:31.648 "name": "BaseBdev4", 00:14:31.648 "uuid": "fec33f0a-8aab-5a3a-bdf8-66e2735aaddf", 00:14:31.648 "is_configured": true, 00:14:31.648 "data_offset": 2048, 00:14:31.648 "data_size": 63488 00:14:31.648 } 00:14:31.648 ] 00:14:31.648 }' 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 80294 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 80294 ']' 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 80294 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.648 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80294 00:14:31.909 killing process with pid 80294 00:14:31.909 Received shutdown signal, test time was about 18.085495 seconds 00:14:31.909 00:14:31.909 Latency(us) 00:14:31.909 [2024-12-12T09:28:05.932Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:31.909 [2024-12-12T09:28:05.932Z] =================================================================================================================== 00:14:31.909 [2024-12-12T09:28:05.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.909 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.909 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.909 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80294' 00:14:31.909 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 80294 00:14:31.909 [2024-12-12 09:28:05.684946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.909 [2024-12-12 09:28:05.685063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.909 [2024-12-12 09:28:05.685132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.909 [2024-12-12 09:28:05.685142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:31.909 09:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 80294 00:14:32.168 [2024-12-12 09:28:06.120195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.550 09:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:33.550 00:14:33.550 real 0m21.656s 00:14:33.550 user 0m28.228s 00:14:33.550 sys 0m2.846s 00:14:33.550 09:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.550 09:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.550 ************************************ 00:14:33.550 END TEST raid_rebuild_test_sb_io 00:14:33.550 
************************************ 00:14:33.550 09:28:07 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:33.550 09:28:07 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:33.550 09:28:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:33.550 09:28:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.550 09:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.550 ************************************ 00:14:33.550 START TEST raid5f_state_function_test 00:14:33.550 ************************************ 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.550 09:28:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81020 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:33.550 09:28:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81020' 00:14:33.550 Process raid pid: 81020 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81020 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81020 ']' 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.550 09:28:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.550 [2024-12-12 09:28:07.541705] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:14:33.550 [2024-12-12 09:28:07.541980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.810 [2024-12-12 09:28:07.721177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.069 [2024-12-12 09:28:07.855359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.329 [2024-12-12 09:28:08.100180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.330 [2024-12-12 09:28:08.100297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.589 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.589 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 [2024-12-12 09:28:08.371135] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.590 [2024-12-12 09:28:08.371193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.590 [2024-12-12 09:28:08.371203] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.590 [2024-12-12 09:28:08.371212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.590 [2024-12-12 09:28:08.371218] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:34.590 [2024-12-12 09:28:08.371227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.590 "name": "Existed_Raid", 00:14:34.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.590 "strip_size_kb": 64, 00:14:34.590 "state": "configuring", 00:14:34.590 "raid_level": "raid5f", 00:14:34.590 "superblock": false, 00:14:34.590 "num_base_bdevs": 3, 00:14:34.590 "num_base_bdevs_discovered": 0, 00:14:34.590 "num_base_bdevs_operational": 3, 00:14:34.590 "base_bdevs_list": [ 00:14:34.590 { 00:14:34.590 "name": "BaseBdev1", 00:14:34.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.590 "is_configured": false, 00:14:34.590 "data_offset": 0, 00:14:34.590 "data_size": 0 00:14:34.590 }, 00:14:34.590 { 00:14:34.590 "name": "BaseBdev2", 00:14:34.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.590 "is_configured": false, 00:14:34.590 "data_offset": 0, 00:14:34.590 "data_size": 0 00:14:34.590 }, 00:14:34.590 { 00:14:34.590 "name": "BaseBdev3", 00:14:34.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.590 "is_configured": false, 00:14:34.590 "data_offset": 0, 00:14:34.590 "data_size": 0 00:14:34.590 } 00:14:34.590 ] 00:14:34.590 }' 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.590 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.849 [2024-12-12 09:28:08.846217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.849 [2024-12-12 09:28:08.846295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.849 [2024-12-12 09:28:08.858217] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.849 [2024-12-12 09:28:08.858307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.849 [2024-12-12 09:28:08.858333] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.849 [2024-12-12 09:28:08.858355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.849 [2024-12-12 09:28:08.858373] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.849 [2024-12-12 09:28:08.858394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.849 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.109 [2024-12-12 09:28:08.911823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.109 BaseBdev1 00:14:35.109 09:28:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.109 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:35.109 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:35.109 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.110 [ 00:14:35.110 { 00:14:35.110 "name": "BaseBdev1", 00:14:35.110 "aliases": [ 00:14:35.110 "42f67a18-3422-4cb7-9da0-158d32630258" 00:14:35.110 ], 00:14:35.110 "product_name": "Malloc disk", 00:14:35.110 "block_size": 512, 00:14:35.110 "num_blocks": 65536, 00:14:35.110 "uuid": "42f67a18-3422-4cb7-9da0-158d32630258", 00:14:35.110 "assigned_rate_limits": { 00:14:35.110 "rw_ios_per_sec": 0, 00:14:35.110 
"rw_mbytes_per_sec": 0, 00:14:35.110 "r_mbytes_per_sec": 0, 00:14:35.110 "w_mbytes_per_sec": 0 00:14:35.110 }, 00:14:35.110 "claimed": true, 00:14:35.110 "claim_type": "exclusive_write", 00:14:35.110 "zoned": false, 00:14:35.110 "supported_io_types": { 00:14:35.110 "read": true, 00:14:35.110 "write": true, 00:14:35.110 "unmap": true, 00:14:35.110 "flush": true, 00:14:35.110 "reset": true, 00:14:35.110 "nvme_admin": false, 00:14:35.110 "nvme_io": false, 00:14:35.110 "nvme_io_md": false, 00:14:35.110 "write_zeroes": true, 00:14:35.110 "zcopy": true, 00:14:35.110 "get_zone_info": false, 00:14:35.110 "zone_management": false, 00:14:35.110 "zone_append": false, 00:14:35.110 "compare": false, 00:14:35.110 "compare_and_write": false, 00:14:35.110 "abort": true, 00:14:35.110 "seek_hole": false, 00:14:35.110 "seek_data": false, 00:14:35.110 "copy": true, 00:14:35.110 "nvme_iov_md": false 00:14:35.110 }, 00:14:35.110 "memory_domains": [ 00:14:35.110 { 00:14:35.110 "dma_device_id": "system", 00:14:35.110 "dma_device_type": 1 00:14:35.110 }, 00:14:35.110 { 00:14:35.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.110 "dma_device_type": 2 00:14:35.110 } 00:14:35.110 ], 00:14:35.110 "driver_specific": {} 00:14:35.110 } 00:14:35.110 ] 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.110 09:28:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.110 09:28:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.110 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.110 "name": "Existed_Raid", 00:14:35.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.110 "strip_size_kb": 64, 00:14:35.110 "state": "configuring", 00:14:35.110 "raid_level": "raid5f", 00:14:35.110 "superblock": false, 00:14:35.110 "num_base_bdevs": 3, 00:14:35.110 "num_base_bdevs_discovered": 1, 00:14:35.110 "num_base_bdevs_operational": 3, 00:14:35.110 "base_bdevs_list": [ 00:14:35.110 { 00:14:35.110 "name": "BaseBdev1", 00:14:35.110 "uuid": "42f67a18-3422-4cb7-9da0-158d32630258", 00:14:35.110 "is_configured": true, 00:14:35.110 "data_offset": 0, 00:14:35.110 "data_size": 65536 00:14:35.110 }, 00:14:35.110 { 00:14:35.110 "name": 
"BaseBdev2", 00:14:35.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.110 "is_configured": false, 00:14:35.110 "data_offset": 0, 00:14:35.110 "data_size": 0 00:14:35.110 }, 00:14:35.110 { 00:14:35.110 "name": "BaseBdev3", 00:14:35.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.110 "is_configured": false, 00:14:35.110 "data_offset": 0, 00:14:35.110 "data_size": 0 00:14:35.110 } 00:14:35.110 ] 00:14:35.110 }' 00:14:35.110 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.110 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.370 [2024-12-12 09:28:09.379018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.370 [2024-12-12 09:28:09.379055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.370 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.370 [2024-12-12 09:28:09.391055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.631 [2024-12-12 09:28:09.393145] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:35.631 [2024-12-12 09:28:09.393218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.631 [2024-12-12 09:28:09.393263] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.631 [2024-12-12 09:28:09.393285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.631 "name": "Existed_Raid", 00:14:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.631 "strip_size_kb": 64, 00:14:35.631 "state": "configuring", 00:14:35.631 "raid_level": "raid5f", 00:14:35.631 "superblock": false, 00:14:35.631 "num_base_bdevs": 3, 00:14:35.631 "num_base_bdevs_discovered": 1, 00:14:35.631 "num_base_bdevs_operational": 3, 00:14:35.631 "base_bdevs_list": [ 00:14:35.631 { 00:14:35.631 "name": "BaseBdev1", 00:14:35.631 "uuid": "42f67a18-3422-4cb7-9da0-158d32630258", 00:14:35.631 "is_configured": true, 00:14:35.631 "data_offset": 0, 00:14:35.631 "data_size": 65536 00:14:35.631 }, 00:14:35.631 { 00:14:35.631 "name": "BaseBdev2", 00:14:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.631 "is_configured": false, 00:14:35.631 "data_offset": 0, 00:14:35.631 "data_size": 0 00:14:35.631 }, 00:14:35.631 { 00:14:35.631 "name": "BaseBdev3", 00:14:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.631 "is_configured": false, 00:14:35.631 "data_offset": 0, 00:14:35.631 "data_size": 0 00:14:35.631 } 00:14:35.631 ] 00:14:35.631 }' 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.631 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.891 [2024-12-12 09:28:09.907715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.891 BaseBdev2 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.891 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.151 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.151 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.151 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.151 09:28:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.151 [ 00:14:36.151 { 00:14:36.151 "name": "BaseBdev2", 00:14:36.151 "aliases": [ 00:14:36.151 "1db12447-c48a-4ef6-b491-f77c537274d3" 00:14:36.151 ], 00:14:36.151 "product_name": "Malloc disk", 00:14:36.151 "block_size": 512, 00:14:36.151 "num_blocks": 65536, 00:14:36.151 "uuid": "1db12447-c48a-4ef6-b491-f77c537274d3", 00:14:36.151 "assigned_rate_limits": { 00:14:36.151 "rw_ios_per_sec": 0, 00:14:36.151 "rw_mbytes_per_sec": 0, 00:14:36.151 "r_mbytes_per_sec": 0, 00:14:36.151 "w_mbytes_per_sec": 0 00:14:36.151 }, 00:14:36.151 "claimed": true, 00:14:36.151 "claim_type": "exclusive_write", 00:14:36.151 "zoned": false, 00:14:36.151 "supported_io_types": { 00:14:36.151 "read": true, 00:14:36.151 "write": true, 00:14:36.151 "unmap": true, 00:14:36.151 "flush": true, 00:14:36.151 "reset": true, 00:14:36.151 "nvme_admin": false, 00:14:36.151 "nvme_io": false, 00:14:36.151 "nvme_io_md": false, 00:14:36.151 "write_zeroes": true, 00:14:36.151 "zcopy": true, 00:14:36.151 "get_zone_info": false, 00:14:36.151 "zone_management": false, 00:14:36.151 "zone_append": false, 00:14:36.151 "compare": false, 00:14:36.151 "compare_and_write": false, 00:14:36.151 "abort": true, 00:14:36.151 "seek_hole": false, 00:14:36.151 "seek_data": false, 00:14:36.151 "copy": true, 00:14:36.151 "nvme_iov_md": false 00:14:36.151 }, 00:14:36.151 "memory_domains": [ 00:14:36.151 { 00:14:36.151 "dma_device_id": "system", 00:14:36.151 "dma_device_type": 1 00:14:36.151 }, 00:14:36.151 { 00:14:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.151 "dma_device_type": 2 00:14:36.151 } 00:14:36.151 ], 00:14:36.151 "driver_specific": {} 00:14:36.151 } 00:14:36.151 ] 00:14:36.151 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.151 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:36.152 "name": "Existed_Raid", 00:14:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.152 "strip_size_kb": 64, 00:14:36.152 "state": "configuring", 00:14:36.152 "raid_level": "raid5f", 00:14:36.152 "superblock": false, 00:14:36.152 "num_base_bdevs": 3, 00:14:36.152 "num_base_bdevs_discovered": 2, 00:14:36.152 "num_base_bdevs_operational": 3, 00:14:36.152 "base_bdevs_list": [ 00:14:36.152 { 00:14:36.152 "name": "BaseBdev1", 00:14:36.152 "uuid": "42f67a18-3422-4cb7-9da0-158d32630258", 00:14:36.152 "is_configured": true, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 65536 00:14:36.152 }, 00:14:36.152 { 00:14:36.152 "name": "BaseBdev2", 00:14:36.152 "uuid": "1db12447-c48a-4ef6-b491-f77c537274d3", 00:14:36.152 "is_configured": true, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 65536 00:14:36.152 }, 00:14:36.152 { 00:14:36.152 "name": "BaseBdev3", 00:14:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.152 "is_configured": false, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 0 00:14:36.152 } 00:14:36.152 ] 00:14:36.152 }' 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.152 09:28:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.412 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.412 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.412 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.672 [2024-12-12 09:28:10.435612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.672 [2024-12-12 09:28:10.435685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:36.672 [2024-12-12 09:28:10.435704] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:36.672 [2024-12-12 09:28:10.436422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:36.672 [2024-12-12 09:28:10.441493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:36.672 [2024-12-12 09:28:10.441552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:36.672 [2024-12-12 09:28:10.441930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.672 BaseBdev3 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.672 [ 00:14:36.672 { 00:14:36.672 "name": "BaseBdev3", 00:14:36.672 "aliases": [ 00:14:36.672 "f2db2fe1-5b4e-48f1-a924-a64f4424fad2" 00:14:36.672 ], 00:14:36.672 "product_name": "Malloc disk", 00:14:36.672 "block_size": 512, 00:14:36.672 "num_blocks": 65536, 00:14:36.672 "uuid": "f2db2fe1-5b4e-48f1-a924-a64f4424fad2", 00:14:36.672 "assigned_rate_limits": { 00:14:36.672 "rw_ios_per_sec": 0, 00:14:36.672 "rw_mbytes_per_sec": 0, 00:14:36.672 "r_mbytes_per_sec": 0, 00:14:36.672 "w_mbytes_per_sec": 0 00:14:36.672 }, 00:14:36.672 "claimed": true, 00:14:36.672 "claim_type": "exclusive_write", 00:14:36.672 "zoned": false, 00:14:36.672 "supported_io_types": { 00:14:36.672 "read": true, 00:14:36.672 "write": true, 00:14:36.672 "unmap": true, 00:14:36.672 "flush": true, 00:14:36.672 "reset": true, 00:14:36.672 "nvme_admin": false, 00:14:36.672 "nvme_io": false, 00:14:36.672 "nvme_io_md": false, 00:14:36.672 "write_zeroes": true, 00:14:36.672 "zcopy": true, 00:14:36.672 "get_zone_info": false, 00:14:36.672 "zone_management": false, 00:14:36.672 "zone_append": false, 00:14:36.672 "compare": false, 00:14:36.672 "compare_and_write": false, 00:14:36.672 "abort": true, 00:14:36.672 "seek_hole": false, 00:14:36.672 "seek_data": false, 00:14:36.672 "copy": true, 00:14:36.672 "nvme_iov_md": false 00:14:36.672 }, 00:14:36.672 "memory_domains": [ 00:14:36.672 { 00:14:36.672 "dma_device_id": "system", 00:14:36.672 "dma_device_type": 1 00:14:36.672 }, 00:14:36.672 { 00:14:36.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.672 "dma_device_type": 2 00:14:36.672 } 00:14:36.672 ], 00:14:36.672 "driver_specific": {} 00:14:36.672 } 00:14:36.672 ] 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.672 09:28:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.672 "name": "Existed_Raid", 00:14:36.672 "uuid": "aa2f16b1-5280-40f1-818d-2a0613945c08", 00:14:36.672 "strip_size_kb": 64, 00:14:36.672 "state": "online", 00:14:36.672 "raid_level": "raid5f", 00:14:36.672 "superblock": false, 00:14:36.672 "num_base_bdevs": 3, 00:14:36.672 "num_base_bdevs_discovered": 3, 00:14:36.672 "num_base_bdevs_operational": 3, 00:14:36.672 "base_bdevs_list": [ 00:14:36.672 { 00:14:36.672 "name": "BaseBdev1", 00:14:36.672 "uuid": "42f67a18-3422-4cb7-9da0-158d32630258", 00:14:36.672 "is_configured": true, 00:14:36.672 "data_offset": 0, 00:14:36.672 "data_size": 65536 00:14:36.672 }, 00:14:36.672 { 00:14:36.672 "name": "BaseBdev2", 00:14:36.672 "uuid": "1db12447-c48a-4ef6-b491-f77c537274d3", 00:14:36.672 "is_configured": true, 00:14:36.672 "data_offset": 0, 00:14:36.672 "data_size": 65536 00:14:36.672 }, 00:14:36.672 { 00:14:36.672 "name": "BaseBdev3", 00:14:36.672 "uuid": "f2db2fe1-5b4e-48f1-a924-a64f4424fad2", 00:14:36.672 "is_configured": true, 00:14:36.672 "data_offset": 0, 00:14:36.672 "data_size": 65536 00:14:36.672 } 00:14:36.672 ] 00:14:36.672 }' 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.672 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.932 09:28:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.932 [2024-12-12 09:28:10.916412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.932 09:28:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.192 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.192 "name": "Existed_Raid", 00:14:37.192 "aliases": [ 00:14:37.192 "aa2f16b1-5280-40f1-818d-2a0613945c08" 00:14:37.192 ], 00:14:37.192 "product_name": "Raid Volume", 00:14:37.192 "block_size": 512, 00:14:37.192 "num_blocks": 131072, 00:14:37.192 "uuid": "aa2f16b1-5280-40f1-818d-2a0613945c08", 00:14:37.192 "assigned_rate_limits": { 00:14:37.192 "rw_ios_per_sec": 0, 00:14:37.192 "rw_mbytes_per_sec": 0, 00:14:37.192 "r_mbytes_per_sec": 0, 00:14:37.192 "w_mbytes_per_sec": 0 00:14:37.192 }, 00:14:37.192 "claimed": false, 00:14:37.192 "zoned": false, 00:14:37.192 "supported_io_types": { 00:14:37.192 "read": true, 00:14:37.193 "write": true, 00:14:37.193 "unmap": false, 00:14:37.193 "flush": false, 00:14:37.193 "reset": true, 00:14:37.193 "nvme_admin": false, 00:14:37.193 "nvme_io": false, 00:14:37.193 "nvme_io_md": false, 00:14:37.193 "write_zeroes": true, 00:14:37.193 "zcopy": false, 00:14:37.193 "get_zone_info": false, 00:14:37.193 "zone_management": false, 00:14:37.193 "zone_append": false, 
00:14:37.193 "compare": false, 00:14:37.193 "compare_and_write": false, 00:14:37.193 "abort": false, 00:14:37.193 "seek_hole": false, 00:14:37.193 "seek_data": false, 00:14:37.193 "copy": false, 00:14:37.193 "nvme_iov_md": false 00:14:37.193 }, 00:14:37.193 "driver_specific": { 00:14:37.193 "raid": { 00:14:37.193 "uuid": "aa2f16b1-5280-40f1-818d-2a0613945c08", 00:14:37.193 "strip_size_kb": 64, 00:14:37.193 "state": "online", 00:14:37.193 "raid_level": "raid5f", 00:14:37.193 "superblock": false, 00:14:37.193 "num_base_bdevs": 3, 00:14:37.193 "num_base_bdevs_discovered": 3, 00:14:37.193 "num_base_bdevs_operational": 3, 00:14:37.193 "base_bdevs_list": [ 00:14:37.193 { 00:14:37.193 "name": "BaseBdev1", 00:14:37.193 "uuid": "42f67a18-3422-4cb7-9da0-158d32630258", 00:14:37.193 "is_configured": true, 00:14:37.193 "data_offset": 0, 00:14:37.193 "data_size": 65536 00:14:37.193 }, 00:14:37.193 { 00:14:37.193 "name": "BaseBdev2", 00:14:37.193 "uuid": "1db12447-c48a-4ef6-b491-f77c537274d3", 00:14:37.193 "is_configured": true, 00:14:37.193 "data_offset": 0, 00:14:37.193 "data_size": 65536 00:14:37.193 }, 00:14:37.193 { 00:14:37.193 "name": "BaseBdev3", 00:14:37.193 "uuid": "f2db2fe1-5b4e-48f1-a924-a64f4424fad2", 00:14:37.193 "is_configured": true, 00:14:37.193 "data_offset": 0, 00:14:37.193 "data_size": 65536 00:14:37.193 } 00:14:37.193 ] 00:14:37.193 } 00:14:37.193 } 00:14:37.193 }' 00:14:37.193 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.193 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:37.193 BaseBdev2 00:14:37.193 BaseBdev3' 00:14:37.193 09:28:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.193 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.193 [2024-12-12 09:28:11.151883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:37.453 
09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.453 "name": "Existed_Raid", 00:14:37.453 "uuid": "aa2f16b1-5280-40f1-818d-2a0613945c08", 00:14:37.453 "strip_size_kb": 64, 00:14:37.453 "state": 
"online", 00:14:37.453 "raid_level": "raid5f", 00:14:37.453 "superblock": false, 00:14:37.453 "num_base_bdevs": 3, 00:14:37.453 "num_base_bdevs_discovered": 2, 00:14:37.453 "num_base_bdevs_operational": 2, 00:14:37.453 "base_bdevs_list": [ 00:14:37.453 { 00:14:37.453 "name": null, 00:14:37.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.453 "is_configured": false, 00:14:37.453 "data_offset": 0, 00:14:37.453 "data_size": 65536 00:14:37.453 }, 00:14:37.453 { 00:14:37.453 "name": "BaseBdev2", 00:14:37.453 "uuid": "1db12447-c48a-4ef6-b491-f77c537274d3", 00:14:37.453 "is_configured": true, 00:14:37.453 "data_offset": 0, 00:14:37.453 "data_size": 65536 00:14:37.453 }, 00:14:37.453 { 00:14:37.453 "name": "BaseBdev3", 00:14:37.453 "uuid": "f2db2fe1-5b4e-48f1-a924-a64f4424fad2", 00:14:37.453 "is_configured": true, 00:14:37.453 "data_offset": 0, 00:14:37.453 "data_size": 65536 00:14:37.453 } 00:14:37.453 ] 00:14:37.453 }' 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.453 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.713 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.713 [2024-12-12 09:28:11.703561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:37.713 [2024-12-12 09:28:11.703685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.973 [2024-12-12 09:28:11.802830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.973 [2024-12-12 09:28:11.862765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.973 [2024-12-12 09:28:11.862815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.973 09:28:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.234 BaseBdev2 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:38.234 [ 00:14:38.234 { 00:14:38.234 "name": "BaseBdev2", 00:14:38.234 "aliases": [ 00:14:38.234 "1b7f1365-20e5-4518-9661-f5b8ceb83dba" 00:14:38.234 ], 00:14:38.234 "product_name": "Malloc disk", 00:14:38.234 "block_size": 512, 00:14:38.234 "num_blocks": 65536, 00:14:38.234 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:38.234 "assigned_rate_limits": { 00:14:38.234 "rw_ios_per_sec": 0, 00:14:38.234 "rw_mbytes_per_sec": 0, 00:14:38.234 "r_mbytes_per_sec": 0, 00:14:38.234 "w_mbytes_per_sec": 0 00:14:38.234 }, 00:14:38.234 "claimed": false, 00:14:38.234 "zoned": false, 00:14:38.234 "supported_io_types": { 00:14:38.234 "read": true, 00:14:38.234 "write": true, 00:14:38.234 "unmap": true, 00:14:38.234 "flush": true, 00:14:38.234 "reset": true, 00:14:38.234 "nvme_admin": false, 00:14:38.234 "nvme_io": false, 00:14:38.234 "nvme_io_md": false, 00:14:38.234 "write_zeroes": true, 00:14:38.234 "zcopy": true, 00:14:38.234 "get_zone_info": false, 00:14:38.234 "zone_management": false, 00:14:38.234 "zone_append": false, 00:14:38.234 "compare": false, 00:14:38.234 "compare_and_write": false, 00:14:38.234 "abort": true, 00:14:38.234 "seek_hole": false, 00:14:38.234 "seek_data": false, 00:14:38.234 "copy": true, 00:14:38.234 "nvme_iov_md": false 00:14:38.234 }, 00:14:38.234 "memory_domains": [ 00:14:38.234 { 00:14:38.234 "dma_device_id": "system", 00:14:38.234 "dma_device_type": 1 00:14:38.234 }, 00:14:38.234 { 00:14:38.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.234 "dma_device_type": 2 00:14:38.234 } 00:14:38.234 ], 00:14:38.234 "driver_specific": {} 00:14:38.234 } 00:14:38.234 ] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.234 BaseBdev3 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.234 [ 00:14:38.234 { 00:14:38.234 "name": "BaseBdev3", 00:14:38.234 "aliases": [ 00:14:38.234 "59f6dcef-be4e-410b-a766-64eeebd4692b" 00:14:38.234 ], 00:14:38.234 "product_name": "Malloc disk", 00:14:38.234 "block_size": 512, 00:14:38.234 "num_blocks": 65536, 00:14:38.234 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:38.234 "assigned_rate_limits": { 00:14:38.234 "rw_ios_per_sec": 0, 00:14:38.234 "rw_mbytes_per_sec": 0, 00:14:38.234 "r_mbytes_per_sec": 0, 00:14:38.234 "w_mbytes_per_sec": 0 00:14:38.234 }, 00:14:38.234 "claimed": false, 00:14:38.234 "zoned": false, 00:14:38.234 "supported_io_types": { 00:14:38.234 "read": true, 00:14:38.234 "write": true, 00:14:38.234 "unmap": true, 00:14:38.234 "flush": true, 00:14:38.234 "reset": true, 00:14:38.234 "nvme_admin": false, 00:14:38.234 "nvme_io": false, 00:14:38.234 "nvme_io_md": false, 00:14:38.234 "write_zeroes": true, 00:14:38.234 "zcopy": true, 00:14:38.234 "get_zone_info": false, 00:14:38.234 "zone_management": false, 00:14:38.234 "zone_append": false, 00:14:38.234 "compare": false, 00:14:38.234 "compare_and_write": false, 00:14:38.234 "abort": true, 00:14:38.234 "seek_hole": false, 00:14:38.234 "seek_data": false, 00:14:38.234 "copy": true, 00:14:38.234 "nvme_iov_md": false 00:14:38.234 }, 00:14:38.234 "memory_domains": [ 00:14:38.234 { 00:14:38.234 "dma_device_id": "system", 00:14:38.234 "dma_device_type": 1 00:14:38.234 }, 00:14:38.234 { 00:14:38.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.234 "dma_device_type": 2 00:14:38.234 } 00:14:38.234 ], 00:14:38.234 "driver_specific": {} 00:14:38.234 } 00:14:38.234 ] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:38.234 09:28:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.234 [2024-12-12 09:28:12.192169] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.234 [2024-12-12 09:28:12.192281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.234 [2024-12-12 09:28:12.192326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.234 [2024-12-12 09:28:12.194410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.234 09:28:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.234 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.235 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.235 "name": "Existed_Raid", 00:14:38.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.235 "strip_size_kb": 64, 00:14:38.235 "state": "configuring", 00:14:38.235 "raid_level": "raid5f", 00:14:38.235 "superblock": false, 00:14:38.235 "num_base_bdevs": 3, 00:14:38.235 "num_base_bdevs_discovered": 2, 00:14:38.235 "num_base_bdevs_operational": 3, 00:14:38.235 "base_bdevs_list": [ 00:14:38.235 { 00:14:38.235 "name": "BaseBdev1", 00:14:38.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.235 "is_configured": false, 00:14:38.235 "data_offset": 0, 00:14:38.235 "data_size": 0 00:14:38.235 }, 00:14:38.235 { 00:14:38.235 "name": "BaseBdev2", 00:14:38.235 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:38.235 "is_configured": true, 00:14:38.235 "data_offset": 0, 00:14:38.235 "data_size": 65536 00:14:38.235 }, 00:14:38.235 { 00:14:38.235 "name": "BaseBdev3", 00:14:38.235 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:38.235 "is_configured": true, 
00:14:38.235 "data_offset": 0, 00:14:38.235 "data_size": 65536 00:14:38.235 } 00:14:38.235 ] 00:14:38.235 }' 00:14:38.235 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.235 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.805 [2024-12-12 09:28:12.639415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.805 09:28:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.805 "name": "Existed_Raid", 00:14:38.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.805 "strip_size_kb": 64, 00:14:38.805 "state": "configuring", 00:14:38.805 "raid_level": "raid5f", 00:14:38.805 "superblock": false, 00:14:38.805 "num_base_bdevs": 3, 00:14:38.805 "num_base_bdevs_discovered": 1, 00:14:38.805 "num_base_bdevs_operational": 3, 00:14:38.805 "base_bdevs_list": [ 00:14:38.805 { 00:14:38.805 "name": "BaseBdev1", 00:14:38.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.805 "is_configured": false, 00:14:38.805 "data_offset": 0, 00:14:38.805 "data_size": 0 00:14:38.805 }, 00:14:38.805 { 00:14:38.805 "name": null, 00:14:38.805 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:38.805 "is_configured": false, 00:14:38.805 "data_offset": 0, 00:14:38.805 "data_size": 65536 00:14:38.805 }, 00:14:38.805 { 00:14:38.805 "name": "BaseBdev3", 00:14:38.805 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:38.805 "is_configured": true, 00:14:38.805 "data_offset": 0, 00:14:38.805 "data_size": 65536 00:14:38.805 } 00:14:38.805 ] 00:14:38.805 }' 00:14:38.805 09:28:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.805 09:28:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.065 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:39.065 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.065 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.065 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 [2024-12-12 09:28:13.174491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.326 BaseBdev1 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.326 09:28:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 [ 00:14:39.326 { 00:14:39.326 "name": "BaseBdev1", 00:14:39.326 "aliases": [ 00:14:39.326 "c8f28138-be46-4e70-965c-42d7bf1e1cb1" 00:14:39.326 ], 00:14:39.326 "product_name": "Malloc disk", 00:14:39.326 "block_size": 512, 00:14:39.326 "num_blocks": 65536, 00:14:39.326 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:39.326 "assigned_rate_limits": { 00:14:39.326 "rw_ios_per_sec": 0, 00:14:39.326 "rw_mbytes_per_sec": 0, 00:14:39.326 "r_mbytes_per_sec": 0, 00:14:39.326 "w_mbytes_per_sec": 0 00:14:39.326 }, 00:14:39.326 "claimed": true, 00:14:39.326 "claim_type": "exclusive_write", 00:14:39.326 "zoned": false, 00:14:39.326 "supported_io_types": { 00:14:39.326 "read": true, 00:14:39.326 "write": true, 00:14:39.326 "unmap": true, 00:14:39.326 "flush": true, 00:14:39.326 "reset": true, 00:14:39.326 "nvme_admin": false, 00:14:39.326 "nvme_io": false, 00:14:39.326 "nvme_io_md": false, 00:14:39.326 "write_zeroes": true, 00:14:39.326 "zcopy": true, 00:14:39.326 "get_zone_info": false, 00:14:39.326 "zone_management": false, 00:14:39.326 "zone_append": false, 00:14:39.326 
"compare": false, 00:14:39.326 "compare_and_write": false, 00:14:39.326 "abort": true, 00:14:39.326 "seek_hole": false, 00:14:39.326 "seek_data": false, 00:14:39.326 "copy": true, 00:14:39.326 "nvme_iov_md": false 00:14:39.326 }, 00:14:39.326 "memory_domains": [ 00:14:39.326 { 00:14:39.326 "dma_device_id": "system", 00:14:39.326 "dma_device_type": 1 00:14:39.326 }, 00:14:39.326 { 00:14:39.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.326 "dma_device_type": 2 00:14:39.326 } 00:14:39.326 ], 00:14:39.326 "driver_specific": {} 00:14:39.326 } 00:14:39.326 ] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.326 09:28:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.326 "name": "Existed_Raid", 00:14:39.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.326 "strip_size_kb": 64, 00:14:39.326 "state": "configuring", 00:14:39.326 "raid_level": "raid5f", 00:14:39.326 "superblock": false, 00:14:39.326 "num_base_bdevs": 3, 00:14:39.326 "num_base_bdevs_discovered": 2, 00:14:39.326 "num_base_bdevs_operational": 3, 00:14:39.326 "base_bdevs_list": [ 00:14:39.326 { 00:14:39.326 "name": "BaseBdev1", 00:14:39.326 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:39.326 "is_configured": true, 00:14:39.326 "data_offset": 0, 00:14:39.326 "data_size": 65536 00:14:39.326 }, 00:14:39.326 { 00:14:39.326 "name": null, 00:14:39.326 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:39.326 "is_configured": false, 00:14:39.326 "data_offset": 0, 00:14:39.326 "data_size": 65536 00:14:39.326 }, 00:14:39.326 { 00:14:39.326 "name": "BaseBdev3", 00:14:39.326 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:39.326 "is_configured": true, 00:14:39.326 "data_offset": 0, 00:14:39.326 "data_size": 65536 00:14:39.326 } 00:14:39.326 ] 00:14:39.326 }' 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.326 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.897 09:28:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.897 [2024-12-12 09:28:13.749584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.897 09:28:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.897 "name": "Existed_Raid", 00:14:39.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.897 "strip_size_kb": 64, 00:14:39.897 "state": "configuring", 00:14:39.897 "raid_level": "raid5f", 00:14:39.897 "superblock": false, 00:14:39.897 "num_base_bdevs": 3, 00:14:39.897 "num_base_bdevs_discovered": 1, 00:14:39.897 "num_base_bdevs_operational": 3, 00:14:39.897 "base_bdevs_list": [ 00:14:39.897 { 00:14:39.897 "name": "BaseBdev1", 00:14:39.897 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:39.897 "is_configured": true, 00:14:39.897 "data_offset": 0, 00:14:39.897 "data_size": 65536 00:14:39.897 }, 00:14:39.897 { 00:14:39.897 "name": null, 00:14:39.897 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:39.897 "is_configured": false, 00:14:39.897 "data_offset": 0, 00:14:39.897 "data_size": 65536 00:14:39.897 }, 00:14:39.897 { 00:14:39.897 "name": null, 
00:14:39.897 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:39.897 "is_configured": false, 00:14:39.897 "data_offset": 0, 00:14:39.897 "data_size": 65536 00:14:39.897 } 00:14:39.897 ] 00:14:39.897 }' 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.897 09:28:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.468 [2024-12-12 09:28:14.256772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.468 09:28:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.468 "name": "Existed_Raid", 00:14:40.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.468 "strip_size_kb": 64, 00:14:40.468 "state": "configuring", 00:14:40.468 "raid_level": "raid5f", 00:14:40.468 "superblock": false, 00:14:40.468 "num_base_bdevs": 3, 00:14:40.468 "num_base_bdevs_discovered": 2, 00:14:40.468 "num_base_bdevs_operational": 3, 00:14:40.468 "base_bdevs_list": [ 00:14:40.468 { 
00:14:40.468 "name": "BaseBdev1", 00:14:40.468 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:40.468 "is_configured": true, 00:14:40.468 "data_offset": 0, 00:14:40.468 "data_size": 65536 00:14:40.468 }, 00:14:40.468 { 00:14:40.468 "name": null, 00:14:40.468 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:40.468 "is_configured": false, 00:14:40.468 "data_offset": 0, 00:14:40.468 "data_size": 65536 00:14:40.468 }, 00:14:40.468 { 00:14:40.468 "name": "BaseBdev3", 00:14:40.468 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:40.468 "is_configured": true, 00:14:40.468 "data_offset": 0, 00:14:40.468 "data_size": 65536 00:14:40.468 } 00:14:40.468 ] 00:14:40.468 }' 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.468 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.728 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.728 [2024-12-12 09:28:14.739971] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.989 "name": "Existed_Raid", 00:14:40.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.989 "strip_size_kb": 64, 00:14:40.989 "state": "configuring", 00:14:40.989 "raid_level": "raid5f", 00:14:40.989 "superblock": false, 00:14:40.989 "num_base_bdevs": 3, 00:14:40.989 "num_base_bdevs_discovered": 1, 00:14:40.989 "num_base_bdevs_operational": 3, 00:14:40.989 "base_bdevs_list": [ 00:14:40.989 { 00:14:40.989 "name": null, 00:14:40.989 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:40.989 "is_configured": false, 00:14:40.989 "data_offset": 0, 00:14:40.989 "data_size": 65536 00:14:40.989 }, 00:14:40.989 { 00:14:40.989 "name": null, 00:14:40.989 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:40.989 "is_configured": false, 00:14:40.989 "data_offset": 0, 00:14:40.989 "data_size": 65536 00:14:40.989 }, 00:14:40.989 { 00:14:40.989 "name": "BaseBdev3", 00:14:40.989 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:40.989 "is_configured": true, 00:14:40.989 "data_offset": 0, 00:14:40.989 "data_size": 65536 00:14:40.989 } 00:14:40.989 ] 00:14:40.989 }' 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.989 09:28:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.564 [2024-12-12 09:28:15.354505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.564 09:28:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.564 "name": "Existed_Raid", 00:14:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.564 "strip_size_kb": 64, 00:14:41.564 "state": "configuring", 00:14:41.564 "raid_level": "raid5f", 00:14:41.564 "superblock": false, 00:14:41.564 "num_base_bdevs": 3, 00:14:41.564 "num_base_bdevs_discovered": 2, 00:14:41.564 "num_base_bdevs_operational": 3, 00:14:41.564 "base_bdevs_list": [ 00:14:41.564 { 00:14:41.564 "name": null, 00:14:41.564 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:41.564 "is_configured": false, 00:14:41.564 "data_offset": 0, 00:14:41.564 "data_size": 65536 00:14:41.564 }, 00:14:41.564 { 00:14:41.564 "name": "BaseBdev2", 00:14:41.564 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:41.564 "is_configured": true, 00:14:41.564 "data_offset": 0, 00:14:41.564 "data_size": 65536 00:14:41.564 }, 00:14:41.564 { 00:14:41.564 "name": "BaseBdev3", 00:14:41.564 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:41.564 "is_configured": true, 00:14:41.564 "data_offset": 0, 00:14:41.564 "data_size": 65536 00:14:41.564 } 00:14:41.564 ] 00:14:41.564 }' 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.564 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.824 09:28:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.824 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c8f28138-be46-4e70-965c-42d7bf1e1cb1 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.085 [2024-12-12 09:28:15.902203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:42.085 [2024-12-12 09:28:15.902319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:42.085 [2024-12-12 09:28:15.902336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:42.085 [2024-12-12 09:28:15.902653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:42.085 NewBaseBdev 00:14:42.085 [2024-12-12 09:28:15.907875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:42.085 [2024-12-12 09:28:15.907907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:42.085 [2024-12-12 09:28:15.908193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.085 09:28:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.085 [ 00:14:42.085 { 00:14:42.085 "name": "NewBaseBdev", 00:14:42.085 "aliases": [ 00:14:42.085 "c8f28138-be46-4e70-965c-42d7bf1e1cb1" 00:14:42.085 ], 00:14:42.085 "product_name": "Malloc disk", 00:14:42.085 "block_size": 512, 00:14:42.085 "num_blocks": 65536, 00:14:42.085 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:42.085 "assigned_rate_limits": { 00:14:42.085 "rw_ios_per_sec": 0, 00:14:42.085 "rw_mbytes_per_sec": 0, 00:14:42.085 "r_mbytes_per_sec": 0, 00:14:42.085 "w_mbytes_per_sec": 0 00:14:42.085 }, 00:14:42.085 "claimed": true, 00:14:42.085 "claim_type": "exclusive_write", 00:14:42.085 "zoned": false, 00:14:42.085 "supported_io_types": { 00:14:42.085 "read": true, 00:14:42.085 "write": true, 00:14:42.085 "unmap": true, 00:14:42.085 "flush": true, 00:14:42.085 "reset": true, 00:14:42.085 "nvme_admin": false, 00:14:42.085 "nvme_io": false, 00:14:42.085 "nvme_io_md": false, 00:14:42.085 "write_zeroes": true, 00:14:42.085 "zcopy": true, 00:14:42.085 "get_zone_info": false, 00:14:42.085 "zone_management": false, 00:14:42.085 "zone_append": false, 00:14:42.085 "compare": false, 00:14:42.085 "compare_and_write": false, 00:14:42.085 "abort": true, 00:14:42.085 "seek_hole": false, 00:14:42.085 "seek_data": false, 00:14:42.085 "copy": true, 00:14:42.085 "nvme_iov_md": false 00:14:42.085 }, 00:14:42.085 "memory_domains": [ 00:14:42.085 { 00:14:42.085 "dma_device_id": "system", 00:14:42.085 "dma_device_type": 1 00:14:42.085 }, 00:14:42.085 { 00:14:42.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.085 "dma_device_type": 2 00:14:42.085 } 00:14:42.085 ], 00:14:42.085 "driver_specific": {} 00:14:42.085 } 00:14:42.085 ] 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.085 09:28:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.085 09:28:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.085 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.085 "name": "Existed_Raid", 00:14:42.085 "uuid": "0d756471-052d-4951-bbe1-2f45fc29e55d", 00:14:42.085 "strip_size_kb": 64, 00:14:42.085 "state": "online", 
00:14:42.085 "raid_level": "raid5f", 00:14:42.085 "superblock": false, 00:14:42.085 "num_base_bdevs": 3, 00:14:42.085 "num_base_bdevs_discovered": 3, 00:14:42.085 "num_base_bdevs_operational": 3, 00:14:42.085 "base_bdevs_list": [ 00:14:42.085 { 00:14:42.085 "name": "NewBaseBdev", 00:14:42.085 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:42.085 "is_configured": true, 00:14:42.085 "data_offset": 0, 00:14:42.085 "data_size": 65536 00:14:42.085 }, 00:14:42.085 { 00:14:42.085 "name": "BaseBdev2", 00:14:42.085 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:42.085 "is_configured": true, 00:14:42.085 "data_offset": 0, 00:14:42.085 "data_size": 65536 00:14:42.085 }, 00:14:42.085 { 00:14:42.085 "name": "BaseBdev3", 00:14:42.085 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:42.085 "is_configured": true, 00:14:42.085 "data_offset": 0, 00:14:42.085 "data_size": 65536 00:14:42.085 } 00:14:42.085 ] 00:14:42.085 }' 00:14:42.085 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.085 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:42.655 09:28:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.655 [2024-12-12 09:28:16.430303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.655 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:42.655 "name": "Existed_Raid", 00:14:42.655 "aliases": [ 00:14:42.655 "0d756471-052d-4951-bbe1-2f45fc29e55d" 00:14:42.655 ], 00:14:42.655 "product_name": "Raid Volume", 00:14:42.655 "block_size": 512, 00:14:42.655 "num_blocks": 131072, 00:14:42.655 "uuid": "0d756471-052d-4951-bbe1-2f45fc29e55d", 00:14:42.655 "assigned_rate_limits": { 00:14:42.655 "rw_ios_per_sec": 0, 00:14:42.655 "rw_mbytes_per_sec": 0, 00:14:42.655 "r_mbytes_per_sec": 0, 00:14:42.655 "w_mbytes_per_sec": 0 00:14:42.655 }, 00:14:42.655 "claimed": false, 00:14:42.655 "zoned": false, 00:14:42.655 "supported_io_types": { 00:14:42.655 "read": true, 00:14:42.655 "write": true, 00:14:42.655 "unmap": false, 00:14:42.655 "flush": false, 00:14:42.655 "reset": true, 00:14:42.655 "nvme_admin": false, 00:14:42.655 "nvme_io": false, 00:14:42.655 "nvme_io_md": false, 00:14:42.655 "write_zeroes": true, 00:14:42.655 "zcopy": false, 00:14:42.655 "get_zone_info": false, 00:14:42.655 "zone_management": false, 00:14:42.655 "zone_append": false, 00:14:42.655 "compare": false, 00:14:42.655 "compare_and_write": false, 00:14:42.655 "abort": false, 00:14:42.655 "seek_hole": false, 00:14:42.655 "seek_data": false, 00:14:42.655 "copy": false, 00:14:42.655 "nvme_iov_md": false 00:14:42.655 }, 00:14:42.655 "driver_specific": { 00:14:42.655 "raid": { 00:14:42.655 "uuid": 
"0d756471-052d-4951-bbe1-2f45fc29e55d", 00:14:42.655 "strip_size_kb": 64, 00:14:42.655 "state": "online", 00:14:42.655 "raid_level": "raid5f", 00:14:42.655 "superblock": false, 00:14:42.655 "num_base_bdevs": 3, 00:14:42.655 "num_base_bdevs_discovered": 3, 00:14:42.655 "num_base_bdevs_operational": 3, 00:14:42.655 "base_bdevs_list": [ 00:14:42.655 { 00:14:42.655 "name": "NewBaseBdev", 00:14:42.655 "uuid": "c8f28138-be46-4e70-965c-42d7bf1e1cb1", 00:14:42.655 "is_configured": true, 00:14:42.655 "data_offset": 0, 00:14:42.655 "data_size": 65536 00:14:42.655 }, 00:14:42.655 { 00:14:42.655 "name": "BaseBdev2", 00:14:42.655 "uuid": "1b7f1365-20e5-4518-9661-f5b8ceb83dba", 00:14:42.655 "is_configured": true, 00:14:42.655 "data_offset": 0, 00:14:42.655 "data_size": 65536 00:14:42.655 }, 00:14:42.655 { 00:14:42.655 "name": "BaseBdev3", 00:14:42.655 "uuid": "59f6dcef-be4e-410b-a766-64eeebd4692b", 00:14:42.655 "is_configured": true, 00:14:42.655 "data_offset": 0, 00:14:42.655 "data_size": 65536 00:14:42.655 } 00:14:42.656 ] 00:14:42.656 } 00:14:42.656 } 00:14:42.656 }' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:42.656 BaseBdev2 00:14:42.656 BaseBdev3' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.656 09:28:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.656 09:28:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.656 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.916 [2024-12-12 09:28:16.717704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.916 [2024-12-12 09:28:16.717769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.916 [2024-12-12 09:28:16.717878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.916 [2024-12-12 09:28:16.718215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.916 [2024-12-12 09:28:16.718273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81020 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81020 ']' 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 81020 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81020 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81020' 00:14:42.916 killing process with pid 81020 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 81020 00:14:42.916 [2024-12-12 09:28:16.767896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.916 09:28:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 81020 00:14:43.176 [2024-12-12 09:28:17.077690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:44.559 00:14:44.559 real 0m10.812s 00:14:44.559 user 0m16.961s 00:14:44.559 sys 0m2.107s 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.559 ************************************ 00:14:44.559 END TEST raid5f_state_function_test 00:14:44.559 ************************************ 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.559 09:28:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:44.559 09:28:18 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:44.559 09:28:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.559 09:28:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.559 ************************************ 00:14:44.559 START TEST raid5f_state_function_test_sb 00:14:44.559 ************************************ 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:44.559 09:28:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81647 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:44.559 Process raid pid: 81647 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81647' 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 81647 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81647 ']' 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.559 09:28:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.559 [2024-12-12 09:28:18.436664] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:14:44.559 [2024-12-12 09:28:18.436906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.831 [2024-12-12 09:28:18.617857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.831 [2024-12-12 09:28:18.749368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.112 [2024-12-12 09:28:18.977164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.112 [2024-12-12 09:28:18.977251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 [2024-12-12 09:28:19.249156] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.390 [2024-12-12 09:28:19.249302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.390 [2024-12-12 09:28:19.249336] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.390 [2024-12-12 09:28:19.249360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.390 [2024-12-12 09:28:19.249401] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:45.390 [2024-12-12 09:28:19.249424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.390 09:28:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.390 "name": "Existed_Raid", 00:14:45.390 "uuid": "0c3ec0ca-fd5e-4778-befb-363342025a28", 00:14:45.390 "strip_size_kb": 64, 00:14:45.390 "state": "configuring", 00:14:45.390 "raid_level": "raid5f", 00:14:45.390 "superblock": true, 00:14:45.390 "num_base_bdevs": 3, 00:14:45.390 "num_base_bdevs_discovered": 0, 00:14:45.390 "num_base_bdevs_operational": 3, 00:14:45.390 "base_bdevs_list": [ 00:14:45.390 { 00:14:45.390 "name": "BaseBdev1", 00:14:45.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.390 "is_configured": false, 00:14:45.390 "data_offset": 0, 00:14:45.390 "data_size": 0 00:14:45.390 }, 00:14:45.390 { 00:14:45.390 "name": "BaseBdev2", 00:14:45.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.390 "is_configured": false, 00:14:45.390 "data_offset": 0, 00:14:45.390 "data_size": 0 00:14:45.390 }, 00:14:45.390 { 00:14:45.390 "name": "BaseBdev3", 00:14:45.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.390 "is_configured": false, 00:14:45.390 "data_offset": 0, 00:14:45.390 "data_size": 0 00:14:45.390 } 00:14:45.390 ] 00:14:45.390 }' 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.390 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 [2024-12-12 09:28:19.716217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.973 
[2024-12-12 09:28:19.716293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 [2024-12-12 09:28:19.728217] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.973 [2024-12-12 09:28:19.728297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.973 [2024-12-12 09:28:19.728340] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.973 [2024-12-12 09:28:19.728362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.973 [2024-12-12 09:28:19.728380] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.973 [2024-12-12 09:28:19.728401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 [2024-12-12 09:28:19.783287] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.973 BaseBdev1 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 [ 00:14:45.973 { 00:14:45.973 "name": "BaseBdev1", 00:14:45.973 "aliases": [ 00:14:45.973 "b7ebef86-cc09-46e3-acce-ef639c4f7ca0" 00:14:45.973 ], 00:14:45.973 "product_name": "Malloc disk", 00:14:45.973 "block_size": 512, 00:14:45.973 
"num_blocks": 65536, 00:14:45.973 "uuid": "b7ebef86-cc09-46e3-acce-ef639c4f7ca0", 00:14:45.973 "assigned_rate_limits": { 00:14:45.973 "rw_ios_per_sec": 0, 00:14:45.973 "rw_mbytes_per_sec": 0, 00:14:45.973 "r_mbytes_per_sec": 0, 00:14:45.973 "w_mbytes_per_sec": 0 00:14:45.973 }, 00:14:45.973 "claimed": true, 00:14:45.973 "claim_type": "exclusive_write", 00:14:45.973 "zoned": false, 00:14:45.973 "supported_io_types": { 00:14:45.973 "read": true, 00:14:45.973 "write": true, 00:14:45.973 "unmap": true, 00:14:45.974 "flush": true, 00:14:45.974 "reset": true, 00:14:45.974 "nvme_admin": false, 00:14:45.974 "nvme_io": false, 00:14:45.974 "nvme_io_md": false, 00:14:45.974 "write_zeroes": true, 00:14:45.974 "zcopy": true, 00:14:45.974 "get_zone_info": false, 00:14:45.974 "zone_management": false, 00:14:45.974 "zone_append": false, 00:14:45.974 "compare": false, 00:14:45.974 "compare_and_write": false, 00:14:45.974 "abort": true, 00:14:45.974 "seek_hole": false, 00:14:45.974 "seek_data": false, 00:14:45.974 "copy": true, 00:14:45.974 "nvme_iov_md": false 00:14:45.974 }, 00:14:45.974 "memory_domains": [ 00:14:45.974 { 00:14:45.974 "dma_device_id": "system", 00:14:45.974 "dma_device_type": 1 00:14:45.974 }, 00:14:45.974 { 00:14:45.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.974 "dma_device_type": 2 00:14:45.974 } 00:14:45.974 ], 00:14:45.974 "driver_specific": {} 00:14:45.974 } 00:14:45.974 ] 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.974 "name": "Existed_Raid", 00:14:45.974 "uuid": "23ae52eb-71cf-4ac6-8fd6-b07a31754a96", 00:14:45.974 "strip_size_kb": 64, 00:14:45.974 "state": "configuring", 00:14:45.974 "raid_level": "raid5f", 00:14:45.974 "superblock": true, 00:14:45.974 "num_base_bdevs": 3, 00:14:45.974 "num_base_bdevs_discovered": 1, 00:14:45.974 "num_base_bdevs_operational": 3, 00:14:45.974 "base_bdevs_list": [ 00:14:45.974 { 00:14:45.974 
"name": "BaseBdev1", 00:14:45.974 "uuid": "b7ebef86-cc09-46e3-acce-ef639c4f7ca0", 00:14:45.974 "is_configured": true, 00:14:45.974 "data_offset": 2048, 00:14:45.974 "data_size": 63488 00:14:45.974 }, 00:14:45.974 { 00:14:45.974 "name": "BaseBdev2", 00:14:45.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.974 "is_configured": false, 00:14:45.974 "data_offset": 0, 00:14:45.974 "data_size": 0 00:14:45.974 }, 00:14:45.974 { 00:14:45.974 "name": "BaseBdev3", 00:14:45.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.974 "is_configured": false, 00:14:45.974 "data_offset": 0, 00:14:45.974 "data_size": 0 00:14:45.974 } 00:14:45.974 ] 00:14:45.974 }' 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.974 09:28:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.544 [2024-12-12 09:28:20.294402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.544 [2024-12-12 09:28:20.294443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:46.544 [2024-12-12 09:28:20.306442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.544 [2024-12-12 09:28:20.308581] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.544 [2024-12-12 09:28:20.308675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.544 [2024-12-12 09:28:20.308704] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.544 [2024-12-12 09:28:20.308727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.544 "name": "Existed_Raid", 00:14:46.544 "uuid": "98e54bcd-64f2-45a1-b3e4-cf559983aa67", 00:14:46.544 "strip_size_kb": 64, 00:14:46.544 "state": "configuring", 00:14:46.544 "raid_level": "raid5f", 00:14:46.544 "superblock": true, 00:14:46.544 "num_base_bdevs": 3, 00:14:46.544 "num_base_bdevs_discovered": 1, 00:14:46.544 "num_base_bdevs_operational": 3, 00:14:46.544 "base_bdevs_list": [ 00:14:46.544 { 00:14:46.544 "name": "BaseBdev1", 00:14:46.544 "uuid": "b7ebef86-cc09-46e3-acce-ef639c4f7ca0", 00:14:46.544 "is_configured": true, 00:14:46.544 "data_offset": 2048, 00:14:46.544 "data_size": 63488 00:14:46.544 }, 00:14:46.544 { 00:14:46.544 "name": "BaseBdev2", 00:14:46.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.544 "is_configured": false, 00:14:46.544 "data_offset": 0, 00:14:46.544 "data_size": 0 00:14:46.544 }, 00:14:46.544 { 00:14:46.544 "name": "BaseBdev3", 00:14:46.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.544 "is_configured": false, 00:14:46.544 "data_offset": 0, 00:14:46.544 "data_size": 
0 00:14:46.544 } 00:14:46.544 ] 00:14:46.544 }' 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.544 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.804 [2024-12-12 09:28:20.803229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.804 BaseBdev2 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.804 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.064 [ 00:14:47.064 { 00:14:47.064 "name": "BaseBdev2", 00:14:47.064 "aliases": [ 00:14:47.064 "8a923ea8-6c77-42ab-b6de-4391e5dc5a2c" 00:14:47.064 ], 00:14:47.064 "product_name": "Malloc disk", 00:14:47.064 "block_size": 512, 00:14:47.064 "num_blocks": 65536, 00:14:47.064 "uuid": "8a923ea8-6c77-42ab-b6de-4391e5dc5a2c", 00:14:47.064 "assigned_rate_limits": { 00:14:47.064 "rw_ios_per_sec": 0, 00:14:47.064 "rw_mbytes_per_sec": 0, 00:14:47.064 "r_mbytes_per_sec": 0, 00:14:47.064 "w_mbytes_per_sec": 0 00:14:47.064 }, 00:14:47.064 "claimed": true, 00:14:47.064 "claim_type": "exclusive_write", 00:14:47.064 "zoned": false, 00:14:47.064 "supported_io_types": { 00:14:47.064 "read": true, 00:14:47.064 "write": true, 00:14:47.064 "unmap": true, 00:14:47.064 "flush": true, 00:14:47.064 "reset": true, 00:14:47.064 "nvme_admin": false, 00:14:47.064 "nvme_io": false, 00:14:47.064 "nvme_io_md": false, 00:14:47.064 "write_zeroes": true, 00:14:47.064 "zcopy": true, 00:14:47.064 "get_zone_info": false, 00:14:47.064 "zone_management": false, 00:14:47.064 "zone_append": false, 00:14:47.064 "compare": false, 00:14:47.064 "compare_and_write": false, 00:14:47.064 "abort": true, 00:14:47.064 "seek_hole": false, 00:14:47.064 "seek_data": false, 00:14:47.064 "copy": true, 00:14:47.064 "nvme_iov_md": false 00:14:47.064 }, 00:14:47.064 "memory_domains": [ 00:14:47.064 { 00:14:47.064 "dma_device_id": "system", 00:14:47.064 "dma_device_type": 1 00:14:47.064 }, 00:14:47.064 { 00:14:47.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.064 "dma_device_type": 2 00:14:47.064 } 
00:14:47.064 ], 00:14:47.064 "driver_specific": {} 00:14:47.064 } 00:14:47.064 ] 00:14:47.064 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.065 "name": "Existed_Raid", 00:14:47.065 "uuid": "98e54bcd-64f2-45a1-b3e4-cf559983aa67", 00:14:47.065 "strip_size_kb": 64, 00:14:47.065 "state": "configuring", 00:14:47.065 "raid_level": "raid5f", 00:14:47.065 "superblock": true, 00:14:47.065 "num_base_bdevs": 3, 00:14:47.065 "num_base_bdevs_discovered": 2, 00:14:47.065 "num_base_bdevs_operational": 3, 00:14:47.065 "base_bdevs_list": [ 00:14:47.065 { 00:14:47.065 "name": "BaseBdev1", 00:14:47.065 "uuid": "b7ebef86-cc09-46e3-acce-ef639c4f7ca0", 00:14:47.065 "is_configured": true, 00:14:47.065 "data_offset": 2048, 00:14:47.065 "data_size": 63488 00:14:47.065 }, 00:14:47.065 { 00:14:47.065 "name": "BaseBdev2", 00:14:47.065 "uuid": "8a923ea8-6c77-42ab-b6de-4391e5dc5a2c", 00:14:47.065 "is_configured": true, 00:14:47.065 "data_offset": 2048, 00:14:47.065 "data_size": 63488 00:14:47.065 }, 00:14:47.065 { 00:14:47.065 "name": "BaseBdev3", 00:14:47.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.065 "is_configured": false, 00:14:47.065 "data_offset": 0, 00:14:47.065 "data_size": 0 00:14:47.065 } 00:14:47.065 ] 00:14:47.065 }' 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.065 09:28:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.325 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.325 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:47.325 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.585 [2024-12-12 09:28:21.349904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.585 [2024-12-12 09:28:21.350315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:47.585 [2024-12-12 09:28:21.350379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:47.585 [2024-12-12 09:28:21.350699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:47.585 BaseBdev3 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.585 [2024-12-12 09:28:21.356303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:47.585 [2024-12-12 09:28:21.356362] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:47.585 [2024-12-12 09:28:21.356613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.585 [ 00:14:47.585 { 00:14:47.585 "name": "BaseBdev3", 00:14:47.585 "aliases": [ 00:14:47.585 "c82c2bd3-58e3-4d70-8a53-060aaec8f388" 00:14:47.585 ], 00:14:47.585 "product_name": "Malloc disk", 00:14:47.585 "block_size": 512, 00:14:47.585 "num_blocks": 65536, 00:14:47.585 "uuid": "c82c2bd3-58e3-4d70-8a53-060aaec8f388", 00:14:47.585 "assigned_rate_limits": { 00:14:47.585 "rw_ios_per_sec": 0, 00:14:47.585 "rw_mbytes_per_sec": 0, 00:14:47.585 "r_mbytes_per_sec": 0, 00:14:47.585 "w_mbytes_per_sec": 0 00:14:47.585 }, 00:14:47.585 "claimed": true, 00:14:47.585 "claim_type": "exclusive_write", 00:14:47.585 "zoned": false, 00:14:47.585 "supported_io_types": { 00:14:47.585 "read": true, 00:14:47.585 "write": true, 00:14:47.585 "unmap": true, 00:14:47.585 "flush": true, 00:14:47.585 "reset": true, 00:14:47.585 "nvme_admin": false, 00:14:47.585 "nvme_io": false, 00:14:47.585 "nvme_io_md": false, 00:14:47.585 "write_zeroes": true, 00:14:47.585 "zcopy": true, 00:14:47.585 "get_zone_info": false, 00:14:47.585 "zone_management": false, 00:14:47.585 "zone_append": false, 00:14:47.585 "compare": false, 00:14:47.585 "compare_and_write": false, 00:14:47.585 "abort": true, 00:14:47.585 "seek_hole": false, 00:14:47.585 "seek_data": false, 00:14:47.585 "copy": true, 00:14:47.585 
"nvme_iov_md": false 00:14:47.585 }, 00:14:47.585 "memory_domains": [ 00:14:47.585 { 00:14:47.585 "dma_device_id": "system", 00:14:47.585 "dma_device_type": 1 00:14:47.585 }, 00:14:47.585 { 00:14:47.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.585 "dma_device_type": 2 00:14:47.585 } 00:14:47.585 ], 00:14:47.585 "driver_specific": {} 00:14:47.585 } 00:14:47.585 ] 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.585 "name": "Existed_Raid", 00:14:47.585 "uuid": "98e54bcd-64f2-45a1-b3e4-cf559983aa67", 00:14:47.585 "strip_size_kb": 64, 00:14:47.585 "state": "online", 00:14:47.585 "raid_level": "raid5f", 00:14:47.585 "superblock": true, 00:14:47.585 "num_base_bdevs": 3, 00:14:47.585 "num_base_bdevs_discovered": 3, 00:14:47.585 "num_base_bdevs_operational": 3, 00:14:47.585 "base_bdevs_list": [ 00:14:47.585 { 00:14:47.585 "name": "BaseBdev1", 00:14:47.585 "uuid": "b7ebef86-cc09-46e3-acce-ef639c4f7ca0", 00:14:47.585 "is_configured": true, 00:14:47.585 "data_offset": 2048, 00:14:47.585 "data_size": 63488 00:14:47.585 }, 00:14:47.585 { 00:14:47.585 "name": "BaseBdev2", 00:14:47.585 "uuid": "8a923ea8-6c77-42ab-b6de-4391e5dc5a2c", 00:14:47.585 "is_configured": true, 00:14:47.585 "data_offset": 2048, 00:14:47.585 "data_size": 63488 00:14:47.585 }, 00:14:47.585 { 00:14:47.585 "name": "BaseBdev3", 00:14:47.585 "uuid": "c82c2bd3-58e3-4d70-8a53-060aaec8f388", 00:14:47.585 "is_configured": true, 00:14:47.585 "data_offset": 2048, 00:14:47.585 "data_size": 63488 00:14:47.585 } 00:14:47.585 ] 00:14:47.585 }' 00:14:47.585 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.585 09:28:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.845 [2024-12-12 09:28:21.818808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.845 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.845 "name": "Existed_Raid", 00:14:47.845 "aliases": [ 00:14:47.845 "98e54bcd-64f2-45a1-b3e4-cf559983aa67" 00:14:47.845 ], 00:14:47.845 "product_name": "Raid Volume", 00:14:47.845 "block_size": 512, 00:14:47.845 "num_blocks": 126976, 00:14:47.845 "uuid": "98e54bcd-64f2-45a1-b3e4-cf559983aa67", 00:14:47.845 "assigned_rate_limits": { 00:14:47.845 "rw_ios_per_sec": 0, 00:14:47.845 
"rw_mbytes_per_sec": 0, 00:14:47.845 "r_mbytes_per_sec": 0, 00:14:47.845 "w_mbytes_per_sec": 0 00:14:47.845 }, 00:14:47.845 "claimed": false, 00:14:47.845 "zoned": false, 00:14:47.845 "supported_io_types": { 00:14:47.845 "read": true, 00:14:47.845 "write": true, 00:14:47.845 "unmap": false, 00:14:47.845 "flush": false, 00:14:47.845 "reset": true, 00:14:47.845 "nvme_admin": false, 00:14:47.845 "nvme_io": false, 00:14:47.845 "nvme_io_md": false, 00:14:47.845 "write_zeroes": true, 00:14:47.845 "zcopy": false, 00:14:47.845 "get_zone_info": false, 00:14:47.845 "zone_management": false, 00:14:47.845 "zone_append": false, 00:14:47.845 "compare": false, 00:14:47.845 "compare_and_write": false, 00:14:47.845 "abort": false, 00:14:47.845 "seek_hole": false, 00:14:47.845 "seek_data": false, 00:14:47.845 "copy": false, 00:14:47.845 "nvme_iov_md": false 00:14:47.845 }, 00:14:47.845 "driver_specific": { 00:14:47.845 "raid": { 00:14:47.845 "uuid": "98e54bcd-64f2-45a1-b3e4-cf559983aa67", 00:14:47.845 "strip_size_kb": 64, 00:14:47.845 "state": "online", 00:14:47.846 "raid_level": "raid5f", 00:14:47.846 "superblock": true, 00:14:47.846 "num_base_bdevs": 3, 00:14:47.846 "num_base_bdevs_discovered": 3, 00:14:47.846 "num_base_bdevs_operational": 3, 00:14:47.846 "base_bdevs_list": [ 00:14:47.846 { 00:14:47.846 "name": "BaseBdev1", 00:14:47.846 "uuid": "b7ebef86-cc09-46e3-acce-ef639c4f7ca0", 00:14:47.846 "is_configured": true, 00:14:47.846 "data_offset": 2048, 00:14:47.846 "data_size": 63488 00:14:47.846 }, 00:14:47.846 { 00:14:47.846 "name": "BaseBdev2", 00:14:47.846 "uuid": "8a923ea8-6c77-42ab-b6de-4391e5dc5a2c", 00:14:47.846 "is_configured": true, 00:14:47.846 "data_offset": 2048, 00:14:47.846 "data_size": 63488 00:14:47.846 }, 00:14:47.846 { 00:14:47.846 "name": "BaseBdev3", 00:14:47.846 "uuid": "c82c2bd3-58e3-4d70-8a53-060aaec8f388", 00:14:47.846 "is_configured": true, 00:14:47.846 "data_offset": 2048, 00:14:47.846 "data_size": 63488 00:14:47.846 } 00:14:47.846 ] 00:14:47.846 } 
00:14:47.846 } 00:14:47.846 }' 00:14:47.846 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:48.106 BaseBdev2 00:14:48.106 BaseBdev3' 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.106 09:28:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.106 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.106 [2024-12-12 09:28:22.114162] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.366 "name": "Existed_Raid", 00:14:48.366 "uuid": "98e54bcd-64f2-45a1-b3e4-cf559983aa67", 00:14:48.366 "strip_size_kb": 64, 00:14:48.366 "state": "online", 00:14:48.366 "raid_level": "raid5f", 00:14:48.366 "superblock": true, 00:14:48.366 "num_base_bdevs": 3, 00:14:48.366 "num_base_bdevs_discovered": 2, 00:14:48.366 "num_base_bdevs_operational": 2, 00:14:48.366 "base_bdevs_list": [ 00:14:48.366 { 00:14:48.366 "name": null, 00:14:48.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.366 "is_configured": false, 00:14:48.366 "data_offset": 0, 00:14:48.366 "data_size": 63488 00:14:48.366 }, 00:14:48.366 { 00:14:48.366 "name": "BaseBdev2", 00:14:48.366 "uuid": "8a923ea8-6c77-42ab-b6de-4391e5dc5a2c", 00:14:48.366 "is_configured": true, 00:14:48.366 "data_offset": 2048, 00:14:48.366 "data_size": 63488 00:14:48.366 }, 00:14:48.366 { 00:14:48.366 "name": "BaseBdev3", 00:14:48.366 "uuid": "c82c2bd3-58e3-4d70-8a53-060aaec8f388", 00:14:48.366 "is_configured": true, 00:14:48.366 "data_offset": 2048, 00:14:48.366 "data_size": 63488 00:14:48.366 } 00:14:48.366 ] 00:14:48.366 }' 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.366 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 09:28:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 [2024-12-12 09:28:22.740353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.936 [2024-12-12 09:28:22.740601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.936 [2024-12-12 09:28:22.839126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.936 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 [2024-12-12 09:28:22.895089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.936 [2024-12-12 09:28:22.895200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 BaseBdev2 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 [ 00:14:49.197 { 00:14:49.197 "name": "BaseBdev2", 00:14:49.197 "aliases": [ 00:14:49.197 "8b86bdf3-3de5-4e09-a879-c34a3c61a407" 00:14:49.197 ], 00:14:49.197 "product_name": "Malloc disk", 00:14:49.197 "block_size": 512, 00:14:49.197 "num_blocks": 65536, 00:14:49.197 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:49.197 "assigned_rate_limits": { 00:14:49.197 "rw_ios_per_sec": 0, 00:14:49.197 "rw_mbytes_per_sec": 0, 00:14:49.197 "r_mbytes_per_sec": 0, 00:14:49.197 "w_mbytes_per_sec": 0 00:14:49.197 }, 00:14:49.197 "claimed": false, 00:14:49.197 "zoned": false, 00:14:49.197 "supported_io_types": { 00:14:49.197 "read": true, 00:14:49.197 "write": true, 00:14:49.197 "unmap": true, 00:14:49.197 "flush": true, 00:14:49.197 "reset": true, 00:14:49.197 "nvme_admin": false, 00:14:49.197 "nvme_io": false, 00:14:49.197 "nvme_io_md": false, 00:14:49.197 "write_zeroes": true, 00:14:49.197 "zcopy": true, 00:14:49.197 "get_zone_info": false, 00:14:49.197 "zone_management": false, 00:14:49.197 "zone_append": false, 
00:14:49.197 "compare": false, 00:14:49.197 "compare_and_write": false, 00:14:49.197 "abort": true, 00:14:49.197 "seek_hole": false, 00:14:49.197 "seek_data": false, 00:14:49.197 "copy": true, 00:14:49.197 "nvme_iov_md": false 00:14:49.197 }, 00:14:49.197 "memory_domains": [ 00:14:49.197 { 00:14:49.197 "dma_device_id": "system", 00:14:49.197 "dma_device_type": 1 00:14:49.197 }, 00:14:49.197 { 00:14:49.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.197 "dma_device_type": 2 00:14:49.197 } 00:14:49.197 ], 00:14:49.197 "driver_specific": {} 00:14:49.197 } 00:14:49.197 ] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 BaseBdev3 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.197 
09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 [ 00:14:49.197 { 00:14:49.197 "name": "BaseBdev3", 00:14:49.197 "aliases": [ 00:14:49.197 "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed" 00:14:49.197 ], 00:14:49.197 "product_name": "Malloc disk", 00:14:49.197 "block_size": 512, 00:14:49.197 "num_blocks": 65536, 00:14:49.197 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:49.197 "assigned_rate_limits": { 00:14:49.197 "rw_ios_per_sec": 0, 00:14:49.197 "rw_mbytes_per_sec": 0, 00:14:49.197 "r_mbytes_per_sec": 0, 00:14:49.197 "w_mbytes_per_sec": 0 00:14:49.197 }, 00:14:49.197 "claimed": false, 00:14:49.197 "zoned": false, 00:14:49.197 "supported_io_types": { 00:14:49.197 "read": true, 00:14:49.197 "write": true, 00:14:49.197 "unmap": true, 00:14:49.197 "flush": true, 00:14:49.197 "reset": true, 00:14:49.197 "nvme_admin": false, 00:14:49.197 "nvme_io": false, 00:14:49.197 "nvme_io_md": false, 00:14:49.197 "write_zeroes": true, 00:14:49.197 "zcopy": true, 00:14:49.197 "get_zone_info": 
false, 00:14:49.197 "zone_management": false, 00:14:49.197 "zone_append": false, 00:14:49.197 "compare": false, 00:14:49.197 "compare_and_write": false, 00:14:49.197 "abort": true, 00:14:49.197 "seek_hole": false, 00:14:49.197 "seek_data": false, 00:14:49.197 "copy": true, 00:14:49.197 "nvme_iov_md": false 00:14:49.197 }, 00:14:49.197 "memory_domains": [ 00:14:49.197 { 00:14:49.197 "dma_device_id": "system", 00:14:49.197 "dma_device_type": 1 00:14:49.197 }, 00:14:49.197 { 00:14:49.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.197 "dma_device_type": 2 00:14:49.197 } 00:14:49.197 ], 00:14:49.197 "driver_specific": {} 00:14:49.197 } 00:14:49.197 ] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.197 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 [2024-12-12 09:28:23.214866] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.197 [2024-12-12 09:28:23.214997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.197 [2024-12-12 09:28:23.215062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.197 [2024-12-12 09:28:23.217189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:49.457 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.457 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.457 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.457 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.457 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.457 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.458 09:28:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.458 "name": "Existed_Raid", 00:14:49.458 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:49.458 "strip_size_kb": 64, 00:14:49.458 "state": "configuring", 00:14:49.458 "raid_level": "raid5f", 00:14:49.458 "superblock": true, 00:14:49.458 "num_base_bdevs": 3, 00:14:49.458 "num_base_bdevs_discovered": 2, 00:14:49.458 "num_base_bdevs_operational": 3, 00:14:49.458 "base_bdevs_list": [ 00:14:49.458 { 00:14:49.458 "name": "BaseBdev1", 00:14:49.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.458 "is_configured": false, 00:14:49.458 "data_offset": 0, 00:14:49.458 "data_size": 0 00:14:49.458 }, 00:14:49.458 { 00:14:49.458 "name": "BaseBdev2", 00:14:49.458 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:49.458 "is_configured": true, 00:14:49.458 "data_offset": 2048, 00:14:49.458 "data_size": 63488 00:14:49.458 }, 00:14:49.458 { 00:14:49.458 "name": "BaseBdev3", 00:14:49.458 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:49.458 "is_configured": true, 00:14:49.458 "data_offset": 2048, 00:14:49.458 "data_size": 63488 00:14:49.458 } 00:14:49.458 ] 00:14:49.458 }' 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.458 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.718 [2024-12-12 09:28:23.662067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.718 
09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.718 "name": "Existed_Raid", 00:14:49.718 "uuid": 
"e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:49.718 "strip_size_kb": 64, 00:14:49.718 "state": "configuring", 00:14:49.718 "raid_level": "raid5f", 00:14:49.718 "superblock": true, 00:14:49.718 "num_base_bdevs": 3, 00:14:49.718 "num_base_bdevs_discovered": 1, 00:14:49.718 "num_base_bdevs_operational": 3, 00:14:49.718 "base_bdevs_list": [ 00:14:49.718 { 00:14:49.718 "name": "BaseBdev1", 00:14:49.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.718 "is_configured": false, 00:14:49.718 "data_offset": 0, 00:14:49.718 "data_size": 0 00:14:49.718 }, 00:14:49.718 { 00:14:49.718 "name": null, 00:14:49.718 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:49.718 "is_configured": false, 00:14:49.718 "data_offset": 0, 00:14:49.718 "data_size": 63488 00:14:49.718 }, 00:14:49.718 { 00:14:49.718 "name": "BaseBdev3", 00:14:49.718 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:49.718 "is_configured": true, 00:14:49.718 "data_offset": 2048, 00:14:49.718 "data_size": 63488 00:14:49.718 } 00:14:49.718 ] 00:14:49.718 }' 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.718 09:28:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.288 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:50.289 09:28:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.289 [2024-12-12 09:28:24.201205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.289 BaseBdev1 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.289 [ 00:14:50.289 { 00:14:50.289 "name": "BaseBdev1", 00:14:50.289 "aliases": [ 00:14:50.289 "84b76c98-f5f6-4952-a940-3941ed615304" 00:14:50.289 ], 00:14:50.289 "product_name": "Malloc disk", 00:14:50.289 "block_size": 512, 00:14:50.289 "num_blocks": 65536, 00:14:50.289 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:50.289 "assigned_rate_limits": { 00:14:50.289 "rw_ios_per_sec": 0, 00:14:50.289 "rw_mbytes_per_sec": 0, 00:14:50.289 "r_mbytes_per_sec": 0, 00:14:50.289 "w_mbytes_per_sec": 0 00:14:50.289 }, 00:14:50.289 "claimed": true, 00:14:50.289 "claim_type": "exclusive_write", 00:14:50.289 "zoned": false, 00:14:50.289 "supported_io_types": { 00:14:50.289 "read": true, 00:14:50.289 "write": true, 00:14:50.289 "unmap": true, 00:14:50.289 "flush": true, 00:14:50.289 "reset": true, 00:14:50.289 "nvme_admin": false, 00:14:50.289 "nvme_io": false, 00:14:50.289 "nvme_io_md": false, 00:14:50.289 "write_zeroes": true, 00:14:50.289 "zcopy": true, 00:14:50.289 "get_zone_info": false, 00:14:50.289 "zone_management": false, 00:14:50.289 "zone_append": false, 00:14:50.289 "compare": false, 00:14:50.289 "compare_and_write": false, 00:14:50.289 "abort": true, 00:14:50.289 "seek_hole": false, 00:14:50.289 "seek_data": false, 00:14:50.289 "copy": true, 00:14:50.289 "nvme_iov_md": false 00:14:50.289 }, 00:14:50.289 "memory_domains": [ 00:14:50.289 { 00:14:50.289 "dma_device_id": "system", 00:14:50.289 "dma_device_type": 1 00:14:50.289 }, 00:14:50.289 { 00:14:50.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.289 "dma_device_type": 2 00:14:50.289 } 00:14:50.289 ], 00:14:50.289 "driver_specific": {} 00:14:50.289 } 00:14:50.289 ] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.289 "name": "Existed_Raid", 00:14:50.289 "uuid": 
"e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:50.289 "strip_size_kb": 64, 00:14:50.289 "state": "configuring", 00:14:50.289 "raid_level": "raid5f", 00:14:50.289 "superblock": true, 00:14:50.289 "num_base_bdevs": 3, 00:14:50.289 "num_base_bdevs_discovered": 2, 00:14:50.289 "num_base_bdevs_operational": 3, 00:14:50.289 "base_bdevs_list": [ 00:14:50.289 { 00:14:50.289 "name": "BaseBdev1", 00:14:50.289 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:50.289 "is_configured": true, 00:14:50.289 "data_offset": 2048, 00:14:50.289 "data_size": 63488 00:14:50.289 }, 00:14:50.289 { 00:14:50.289 "name": null, 00:14:50.289 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:50.289 "is_configured": false, 00:14:50.289 "data_offset": 0, 00:14:50.289 "data_size": 63488 00:14:50.289 }, 00:14:50.289 { 00:14:50.289 "name": "BaseBdev3", 00:14:50.289 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:50.289 "is_configured": true, 00:14:50.289 "data_offset": 2048, 00:14:50.289 "data_size": 63488 00:14:50.289 } 00:14:50.289 ] 00:14:50.289 }' 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.289 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.858 09:28:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.858 [2024-12-12 09:28:24.780227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.858 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.859 "name": "Existed_Raid", 00:14:50.859 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:50.859 "strip_size_kb": 64, 00:14:50.859 "state": "configuring", 00:14:50.859 "raid_level": "raid5f", 00:14:50.859 "superblock": true, 00:14:50.859 "num_base_bdevs": 3, 00:14:50.859 "num_base_bdevs_discovered": 1, 00:14:50.859 "num_base_bdevs_operational": 3, 00:14:50.859 "base_bdevs_list": [ 00:14:50.859 { 00:14:50.859 "name": "BaseBdev1", 00:14:50.859 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:50.859 "is_configured": true, 00:14:50.859 "data_offset": 2048, 00:14:50.859 "data_size": 63488 00:14:50.859 }, 00:14:50.859 { 00:14:50.859 "name": null, 00:14:50.859 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:50.859 "is_configured": false, 00:14:50.859 "data_offset": 0, 00:14:50.859 "data_size": 63488 00:14:50.859 }, 00:14:50.859 { 00:14:50.859 "name": null, 00:14:50.859 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:50.859 "is_configured": false, 00:14:50.859 "data_offset": 0, 00:14:50.859 "data_size": 63488 00:14:50.859 } 00:14:50.859 ] 00:14:50.859 }' 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.859 09:28:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.428 [2024-12-12 09:28:25.283449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.428 09:28:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.428 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.428 "name": "Existed_Raid", 00:14:51.428 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:51.428 "strip_size_kb": 64, 00:14:51.428 "state": "configuring", 00:14:51.428 "raid_level": "raid5f", 00:14:51.428 "superblock": true, 00:14:51.428 "num_base_bdevs": 3, 00:14:51.428 "num_base_bdevs_discovered": 2, 00:14:51.428 "num_base_bdevs_operational": 3, 00:14:51.428 "base_bdevs_list": [ 00:14:51.428 { 00:14:51.428 "name": "BaseBdev1", 00:14:51.428 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:51.428 "is_configured": true, 00:14:51.428 "data_offset": 2048, 00:14:51.428 "data_size": 63488 00:14:51.428 }, 00:14:51.428 { 00:14:51.428 "name": null, 00:14:51.428 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:51.428 "is_configured": false, 00:14:51.428 "data_offset": 0, 00:14:51.428 "data_size": 63488 00:14:51.428 }, 00:14:51.428 { 00:14:51.428 "name": "BaseBdev3", 00:14:51.428 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:51.428 
"is_configured": true, 00:14:51.428 "data_offset": 2048, 00:14:51.428 "data_size": 63488 00:14:51.428 } 00:14:51.428 ] 00:14:51.428 }' 00:14:51.429 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.429 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.999 [2024-12-12 09:28:25.814539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.999 "name": "Existed_Raid", 00:14:51.999 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:51.999 "strip_size_kb": 64, 00:14:51.999 "state": "configuring", 00:14:51.999 "raid_level": "raid5f", 00:14:51.999 "superblock": true, 00:14:51.999 "num_base_bdevs": 3, 00:14:51.999 "num_base_bdevs_discovered": 1, 00:14:51.999 "num_base_bdevs_operational": 3, 00:14:51.999 "base_bdevs_list": [ 00:14:51.999 { 00:14:51.999 "name": null, 00:14:51.999 
"uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:51.999 "is_configured": false, 00:14:51.999 "data_offset": 0, 00:14:51.999 "data_size": 63488 00:14:51.999 }, 00:14:51.999 { 00:14:51.999 "name": null, 00:14:51.999 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:51.999 "is_configured": false, 00:14:51.999 "data_offset": 0, 00:14:51.999 "data_size": 63488 00:14:51.999 }, 00:14:51.999 { 00:14:51.999 "name": "BaseBdev3", 00:14:51.999 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:51.999 "is_configured": true, 00:14:51.999 "data_offset": 2048, 00:14:51.999 "data_size": 63488 00:14:51.999 } 00:14:51.999 ] 00:14:51.999 }' 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.999 09:28:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.569 [2024-12-12 09:28:26.426790] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.569 "name": "Existed_Raid", 00:14:52.569 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:52.569 "strip_size_kb": 64, 00:14:52.569 "state": "configuring", 00:14:52.569 "raid_level": "raid5f", 00:14:52.569 "superblock": true, 00:14:52.569 "num_base_bdevs": 3, 00:14:52.569 "num_base_bdevs_discovered": 2, 00:14:52.569 "num_base_bdevs_operational": 3, 00:14:52.569 "base_bdevs_list": [ 00:14:52.569 { 00:14:52.569 "name": null, 00:14:52.569 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:52.569 "is_configured": false, 00:14:52.569 "data_offset": 0, 00:14:52.569 "data_size": 63488 00:14:52.569 }, 00:14:52.569 { 00:14:52.569 "name": "BaseBdev2", 00:14:52.569 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:52.569 "is_configured": true, 00:14:52.569 "data_offset": 2048, 00:14:52.569 "data_size": 63488 00:14:52.569 }, 00:14:52.569 { 00:14:52.569 "name": "BaseBdev3", 00:14:52.569 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:52.569 "is_configured": true, 00:14:52.569 "data_offset": 2048, 00:14:52.569 "data_size": 63488 00:14:52.569 } 00:14:52.569 ] 00:14:52.569 }' 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.569 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.829 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.829 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.829 09:28:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84b76c98-f5f6-4952-a940-3941ed615304 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.089 [2024-12-12 09:28:26.961212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:53.089 [2024-12-12 09:28:26.961534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:53.089 [2024-12-12 09:28:26.961589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:53.089 [2024-12-12 09:28:26.961874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:53.089 NewBaseBdev 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.089 [2024-12-12 09:28:26.966949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:53.089 [2024-12-12 09:28:26.967022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:53.089 [2024-12-12 09:28:26.967246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.089 09:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.089 [ 00:14:53.089 { 00:14:53.089 "name": "NewBaseBdev", 00:14:53.089 "aliases": [ 00:14:53.089 "84b76c98-f5f6-4952-a940-3941ed615304" 00:14:53.089 ], 00:14:53.089 "product_name": "Malloc disk", 00:14:53.089 "block_size": 512, 
00:14:53.089 "num_blocks": 65536, 00:14:53.089 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:53.089 "assigned_rate_limits": { 00:14:53.089 "rw_ios_per_sec": 0, 00:14:53.089 "rw_mbytes_per_sec": 0, 00:14:53.089 "r_mbytes_per_sec": 0, 00:14:53.089 "w_mbytes_per_sec": 0 00:14:53.089 }, 00:14:53.089 "claimed": true, 00:14:53.089 "claim_type": "exclusive_write", 00:14:53.089 "zoned": false, 00:14:53.089 "supported_io_types": { 00:14:53.089 "read": true, 00:14:53.089 "write": true, 00:14:53.089 "unmap": true, 00:14:53.089 "flush": true, 00:14:53.089 "reset": true, 00:14:53.089 "nvme_admin": false, 00:14:53.089 "nvme_io": false, 00:14:53.089 "nvme_io_md": false, 00:14:53.089 "write_zeroes": true, 00:14:53.089 "zcopy": true, 00:14:53.089 "get_zone_info": false, 00:14:53.089 "zone_management": false, 00:14:53.089 "zone_append": false, 00:14:53.089 "compare": false, 00:14:53.089 "compare_and_write": false, 00:14:53.089 "abort": true, 00:14:53.089 "seek_hole": false, 00:14:53.089 "seek_data": false, 00:14:53.089 "copy": true, 00:14:53.089 "nvme_iov_md": false 00:14:53.089 }, 00:14:53.089 "memory_domains": [ 00:14:53.089 { 00:14:53.089 "dma_device_id": "system", 00:14:53.089 "dma_device_type": 1 00:14:53.089 }, 00:14:53.089 { 00:14:53.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.089 "dma_device_type": 2 00:14:53.089 } 00:14:53.089 ], 00:14:53.089 "driver_specific": {} 00:14:53.089 } 00:14:53.089 ] 00:14:53.089 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.089 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.090 "name": "Existed_Raid", 00:14:53.090 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:53.090 "strip_size_kb": 64, 00:14:53.090 "state": "online", 00:14:53.090 "raid_level": "raid5f", 00:14:53.090 "superblock": true, 00:14:53.090 "num_base_bdevs": 3, 00:14:53.090 "num_base_bdevs_discovered": 3, 00:14:53.090 "num_base_bdevs_operational": 3, 00:14:53.090 "base_bdevs_list": [ 00:14:53.090 { 00:14:53.090 "name": 
"NewBaseBdev", 00:14:53.090 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:53.090 "is_configured": true, 00:14:53.090 "data_offset": 2048, 00:14:53.090 "data_size": 63488 00:14:53.090 }, 00:14:53.090 { 00:14:53.090 "name": "BaseBdev2", 00:14:53.090 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:53.090 "is_configured": true, 00:14:53.090 "data_offset": 2048, 00:14:53.090 "data_size": 63488 00:14:53.090 }, 00:14:53.090 { 00:14:53.090 "name": "BaseBdev3", 00:14:53.090 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:53.090 "is_configured": true, 00:14:53.090 "data_offset": 2048, 00:14:53.090 "data_size": 63488 00:14:53.090 } 00:14:53.090 ] 00:14:53.090 }' 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.090 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.659 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.660 09:28:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.660 [2024-12-12 09:28:27.449243] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.660 "name": "Existed_Raid", 00:14:53.660 "aliases": [ 00:14:53.660 "e593bbfc-d095-4a05-9b6a-670f6f813b5d" 00:14:53.660 ], 00:14:53.660 "product_name": "Raid Volume", 00:14:53.660 "block_size": 512, 00:14:53.660 "num_blocks": 126976, 00:14:53.660 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:53.660 "assigned_rate_limits": { 00:14:53.660 "rw_ios_per_sec": 0, 00:14:53.660 "rw_mbytes_per_sec": 0, 00:14:53.660 "r_mbytes_per_sec": 0, 00:14:53.660 "w_mbytes_per_sec": 0 00:14:53.660 }, 00:14:53.660 "claimed": false, 00:14:53.660 "zoned": false, 00:14:53.660 "supported_io_types": { 00:14:53.660 "read": true, 00:14:53.660 "write": true, 00:14:53.660 "unmap": false, 00:14:53.660 "flush": false, 00:14:53.660 "reset": true, 00:14:53.660 "nvme_admin": false, 00:14:53.660 "nvme_io": false, 00:14:53.660 "nvme_io_md": false, 00:14:53.660 "write_zeroes": true, 00:14:53.660 "zcopy": false, 00:14:53.660 "get_zone_info": false, 00:14:53.660 "zone_management": false, 00:14:53.660 "zone_append": false, 00:14:53.660 "compare": false, 00:14:53.660 "compare_and_write": false, 00:14:53.660 "abort": false, 00:14:53.660 "seek_hole": false, 00:14:53.660 "seek_data": false, 00:14:53.660 "copy": false, 00:14:53.660 "nvme_iov_md": false 00:14:53.660 }, 00:14:53.660 "driver_specific": { 00:14:53.660 "raid": { 00:14:53.660 "uuid": "e593bbfc-d095-4a05-9b6a-670f6f813b5d", 00:14:53.660 "strip_size_kb": 64, 00:14:53.660 "state": "online", 00:14:53.660 "raid_level": "raid5f", 00:14:53.660 "superblock": true, 00:14:53.660 "num_base_bdevs": 3, 00:14:53.660 
"num_base_bdevs_discovered": 3, 00:14:53.660 "num_base_bdevs_operational": 3, 00:14:53.660 "base_bdevs_list": [ 00:14:53.660 { 00:14:53.660 "name": "NewBaseBdev", 00:14:53.660 "uuid": "84b76c98-f5f6-4952-a940-3941ed615304", 00:14:53.660 "is_configured": true, 00:14:53.660 "data_offset": 2048, 00:14:53.660 "data_size": 63488 00:14:53.660 }, 00:14:53.660 { 00:14:53.660 "name": "BaseBdev2", 00:14:53.660 "uuid": "8b86bdf3-3de5-4e09-a879-c34a3c61a407", 00:14:53.660 "is_configured": true, 00:14:53.660 "data_offset": 2048, 00:14:53.660 "data_size": 63488 00:14:53.660 }, 00:14:53.660 { 00:14:53.660 "name": "BaseBdev3", 00:14:53.660 "uuid": "6dbb7a3f-2638-41b0-b43e-b76a1c0ba6ed", 00:14:53.660 "is_configured": true, 00:14:53.660 "data_offset": 2048, 00:14:53.660 "data_size": 63488 00:14:53.660 } 00:14:53.660 ] 00:14:53.660 } 00:14:53.660 } 00:14:53.660 }' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:53.660 BaseBdev2 00:14:53.660 BaseBdev3' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.660 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.920 
09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.920 [2024-12-12 09:28:27.736580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.920 [2024-12-12 09:28:27.736644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.920 [2024-12-12 09:28:27.736729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.920 [2024-12-12 09:28:27.737047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.920 [2024-12-12 09:28:27.737064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81647 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81647 ']' 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81647 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81647 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81647' 00:14:53.920 killing process with pid 81647 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81647 00:14:53.920 [2024-12-12 09:28:27.785313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.920 09:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81647 00:14:54.179 [2024-12-12 09:28:28.095898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.560 09:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.560 00:14:55.560 real 0m10.937s 00:14:55.560 user 0m17.210s 00:14:55.560 sys 0m2.125s 00:14:55.560 09:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.560 09:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.560 ************************************ 00:14:55.560 END TEST raid5f_state_function_test_sb 00:14:55.560 ************************************ 00:14:55.560 09:28:29 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:55.560 09:28:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:55.560 09:28:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.560 09:28:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.560 ************************************ 00:14:55.560 START TEST raid5f_superblock_test 00:14:55.560 ************************************ 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82273 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82273 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 82273 ']' 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.560 09:28:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.560 [2024-12-12 09:28:29.450326] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:14:55.560 [2024-12-12 09:28:29.450573] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82273 ] 00:14:55.820 [2024-12-12 09:28:29.629571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.820 [2024-12-12 09:28:29.761189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.079 [2024-12-12 09:28:29.987232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.079 [2024-12-12 09:28:29.987272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.339 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.340 malloc1 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.340 [2024-12-12 09:28:30.314101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.340 [2024-12-12 09:28:30.314231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.340 [2024-12-12 09:28:30.314288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.340 [2024-12-12 09:28:30.314317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.340 [2024-12-12 09:28:30.316657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.340 [2024-12-12 09:28:30.316733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.340 pt1 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.340 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.600 malloc2 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.600 [2024-12-12 09:28:30.377134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.600 [2024-12-12 09:28:30.377244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.600 [2024-12-12 09:28:30.377286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.600 [2024-12-12 09:28:30.377295] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.600 [2024-12-12 09:28:30.379666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.600 [2024-12-12 09:28:30.379733] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.600 pt2 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.600 malloc3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.600 [2024-12-12 09:28:30.466224] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.600 [2024-12-12 09:28:30.466323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.600 [2024-12-12 09:28:30.466380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.600 [2024-12-12 09:28:30.466408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.600 [2024-12-12 09:28:30.468815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.600 [2024-12-12 09:28:30.468918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.600 pt3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.600 [2024-12-12 09:28:30.478268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.600 [2024-12-12 09:28:30.480415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.600 [2024-12-12 09:28:30.480539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.600 [2024-12-12 09:28:30.480753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:56.600 [2024-12-12 09:28:30.480816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:56.600 [2024-12-12 09:28:30.481088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:56.600 [2024-12-12 09:28:30.487022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:56.600 [2024-12-12 09:28:30.487075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:56.600 [2024-12-12 09:28:30.487356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.600 
09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.600 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.600 "name": "raid_bdev1", 00:14:56.600 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:56.600 "strip_size_kb": 64, 00:14:56.600 "state": "online", 00:14:56.600 "raid_level": "raid5f", 00:14:56.600 "superblock": true, 00:14:56.600 "num_base_bdevs": 3, 00:14:56.600 "num_base_bdevs_discovered": 3, 00:14:56.600 "num_base_bdevs_operational": 3, 00:14:56.600 "base_bdevs_list": [ 00:14:56.600 { 00:14:56.600 "name": "pt1", 00:14:56.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.600 "is_configured": true, 00:14:56.600 "data_offset": 2048, 00:14:56.600 "data_size": 63488 00:14:56.600 }, 00:14:56.600 { 00:14:56.600 "name": "pt2", 00:14:56.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.600 "is_configured": true, 00:14:56.600 "data_offset": 2048, 00:14:56.600 "data_size": 63488 00:14:56.600 }, 00:14:56.600 { 00:14:56.600 "name": "pt3", 00:14:56.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.600 "is_configured": true, 00:14:56.600 "data_offset": 2048, 00:14:56.600 "data_size": 63488 00:14:56.600 } 00:14:56.600 ] 00:14:56.600 }' 00:14:56.601 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.601 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:57.172 09:28:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.172 [2024-12-12 09:28:30.950079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.172 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.172 "name": "raid_bdev1", 00:14:57.172 "aliases": [ 00:14:57.172 "9a16b81a-8e9c-4753-b80d-2d8148961fb5" 00:14:57.172 ], 00:14:57.172 "product_name": "Raid Volume", 00:14:57.172 "block_size": 512, 00:14:57.172 "num_blocks": 126976, 00:14:57.172 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:57.172 "assigned_rate_limits": { 00:14:57.172 "rw_ios_per_sec": 0, 00:14:57.172 "rw_mbytes_per_sec": 0, 00:14:57.172 "r_mbytes_per_sec": 0, 00:14:57.172 "w_mbytes_per_sec": 0 00:14:57.172 }, 00:14:57.172 "claimed": false, 00:14:57.172 "zoned": false, 00:14:57.172 "supported_io_types": { 00:14:57.172 "read": true, 00:14:57.172 "write": true, 00:14:57.173 "unmap": false, 00:14:57.173 "flush": false, 00:14:57.173 "reset": true, 00:14:57.173 "nvme_admin": false, 00:14:57.173 "nvme_io": false, 00:14:57.173 "nvme_io_md": false, 
00:14:57.173 "write_zeroes": true, 00:14:57.173 "zcopy": false, 00:14:57.173 "get_zone_info": false, 00:14:57.173 "zone_management": false, 00:14:57.173 "zone_append": false, 00:14:57.173 "compare": false, 00:14:57.173 "compare_and_write": false, 00:14:57.173 "abort": false, 00:14:57.173 "seek_hole": false, 00:14:57.173 "seek_data": false, 00:14:57.173 "copy": false, 00:14:57.173 "nvme_iov_md": false 00:14:57.173 }, 00:14:57.173 "driver_specific": { 00:14:57.173 "raid": { 00:14:57.173 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:57.173 "strip_size_kb": 64, 00:14:57.173 "state": "online", 00:14:57.173 "raid_level": "raid5f", 00:14:57.173 "superblock": true, 00:14:57.173 "num_base_bdevs": 3, 00:14:57.173 "num_base_bdevs_discovered": 3, 00:14:57.173 "num_base_bdevs_operational": 3, 00:14:57.173 "base_bdevs_list": [ 00:14:57.173 { 00:14:57.173 "name": "pt1", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "name": "pt2", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "name": "pt3", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 } 00:14:57.173 ] 00:14:57.173 } 00:14:57.173 } 00:14:57.173 }' 00:14:57.173 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.174 09:28:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:57.174 pt2 00:14:57.174 pt3' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.174 
09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.174 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.435 [2024-12-12 09:28:31.217570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a16b81a-8e9c-4753-b80d-2d8148961fb5 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9a16b81a-8e9c-4753-b80d-2d8148961fb5 ']' 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.435 09:28:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.435 [2024-12-12 09:28:31.261344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.435 [2024-12-12 09:28:31.261408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.435 [2024-12-12 09:28:31.261509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.435 [2024-12-12 09:28:31.261609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.435 [2024-12-12 09:28:31.261692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:57.435 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.436 [2024-12-12 09:28:31.393204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:57.436 [2024-12-12 09:28:31.395246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:57.436 [2024-12-12 09:28:31.395336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:57.436 [2024-12-12 09:28:31.395402] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:57.436 [2024-12-12 09:28:31.395515] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:57.436 [2024-12-12 09:28:31.395594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:57.436 [2024-12-12 09:28:31.395646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.436 [2024-12-12 09:28:31.395715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:57.436 request: 00:14:57.436 { 00:14:57.436 "name": "raid_bdev1", 00:14:57.436 "raid_level": "raid5f", 00:14:57.436 "base_bdevs": [ 00:14:57.436 "malloc1", 00:14:57.436 "malloc2", 00:14:57.436 "malloc3" 00:14:57.436 ], 00:14:57.436 "strip_size_kb": 64, 00:14:57.436 "superblock": false, 00:14:57.436 "method": "bdev_raid_create", 00:14:57.436 "req_id": 1 00:14:57.436 } 00:14:57.436 Got JSON-RPC error response 00:14:57.436 response: 00:14:57.436 { 00:14:57.436 "code": -17, 00:14:57.436 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:57.436 } 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.436 
09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.436 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.695 [2024-12-12 09:28:31.461040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.695 [2024-12-12 09:28:31.461137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.695 [2024-12-12 09:28:31.461171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:57.695 [2024-12-12 09:28:31.461198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.695 [2024-12-12 09:28:31.463547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.695 [2024-12-12 09:28:31.463612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.695 [2024-12-12 09:28:31.463720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:57.695 [2024-12-12 09:28:31.463808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.695 pt1 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.695 "name": "raid_bdev1", 00:14:57.695 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:57.695 "strip_size_kb": 64, 00:14:57.695 "state": "configuring", 00:14:57.695 "raid_level": "raid5f", 00:14:57.695 "superblock": true, 00:14:57.695 "num_base_bdevs": 3, 00:14:57.695 "num_base_bdevs_discovered": 1, 00:14:57.695 
"num_base_bdevs_operational": 3, 00:14:57.695 "base_bdevs_list": [ 00:14:57.695 { 00:14:57.695 "name": "pt1", 00:14:57.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.695 "is_configured": true, 00:14:57.695 "data_offset": 2048, 00:14:57.695 "data_size": 63488 00:14:57.695 }, 00:14:57.695 { 00:14:57.695 "name": null, 00:14:57.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.695 "is_configured": false, 00:14:57.695 "data_offset": 2048, 00:14:57.695 "data_size": 63488 00:14:57.695 }, 00:14:57.695 { 00:14:57.695 "name": null, 00:14:57.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.695 "is_configured": false, 00:14:57.695 "data_offset": 2048, 00:14:57.695 "data_size": 63488 00:14:57.695 } 00:14:57.695 ] 00:14:57.695 }' 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.695 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.953 [2024-12-12 09:28:31.944164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.953 [2024-12-12 09:28:31.944264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.953 [2024-12-12 09:28:31.944297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:57.953 [2024-12-12 09:28:31.944323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.953 [2024-12-12 09:28:31.944710] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.953 [2024-12-12 09:28:31.944770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.953 [2024-12-12 09:28:31.944861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:57.953 [2024-12-12 09:28:31.944914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.953 pt2 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.953 [2024-12-12 09:28:31.956163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.953 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.954 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.212 09:28:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.212 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.212 "name": "raid_bdev1", 00:14:58.212 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:58.212 "strip_size_kb": 64, 00:14:58.212 "state": "configuring", 00:14:58.212 "raid_level": "raid5f", 00:14:58.212 "superblock": true, 00:14:58.212 "num_base_bdevs": 3, 00:14:58.212 "num_base_bdevs_discovered": 1, 00:14:58.212 "num_base_bdevs_operational": 3, 00:14:58.212 "base_bdevs_list": [ 00:14:58.212 { 00:14:58.212 "name": "pt1", 00:14:58.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.212 "is_configured": true, 00:14:58.212 "data_offset": 2048, 00:14:58.212 "data_size": 63488 00:14:58.212 }, 00:14:58.212 { 00:14:58.212 "name": null, 00:14:58.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.212 "is_configured": false, 00:14:58.212 "data_offset": 0, 00:14:58.212 "data_size": 63488 00:14:58.212 }, 00:14:58.212 { 00:14:58.212 "name": null, 00:14:58.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.212 "is_configured": false, 00:14:58.212 "data_offset": 2048, 00:14:58.212 "data_size": 63488 00:14:58.212 } 00:14:58.212 ] 00:14:58.212 }' 00:14:58.212 09:28:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.212 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.471 [2024-12-12 09:28:32.411749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.471 [2024-12-12 09:28:32.411843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.471 [2024-12-12 09:28:32.411874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:58.471 [2024-12-12 09:28:32.411903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.471 [2024-12-12 09:28:32.412302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.471 [2024-12-12 09:28:32.412366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.471 [2024-12-12 09:28:32.412457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.471 [2024-12-12 09:28:32.412506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.471 pt2 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.471 09:28:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.471 [2024-12-12 09:28:32.423719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:58.471 [2024-12-12 09:28:32.423803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.471 [2024-12-12 09:28:32.423840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:58.471 [2024-12-12 09:28:32.423871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.471 [2024-12-12 09:28:32.424253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.471 [2024-12-12 09:28:32.424315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.471 [2024-12-12 09:28:32.424399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:58.471 [2024-12-12 09:28:32.424445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.471 [2024-12-12 09:28:32.424592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:58.471 [2024-12-12 09:28:32.424632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.471 [2024-12-12 09:28:32.424888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:58.471 [2024-12-12 09:28:32.429900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:58.471 [2024-12-12 09:28:32.429954] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:58.471 [2024-12-12 09:28:32.430167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.471 pt3 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.471 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.472 "name": "raid_bdev1", 00:14:58.472 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:58.472 "strip_size_kb": 64, 00:14:58.472 "state": "online", 00:14:58.472 "raid_level": "raid5f", 00:14:58.472 "superblock": true, 00:14:58.472 "num_base_bdevs": 3, 00:14:58.472 "num_base_bdevs_discovered": 3, 00:14:58.472 "num_base_bdevs_operational": 3, 00:14:58.472 "base_bdevs_list": [ 00:14:58.472 { 00:14:58.472 "name": "pt1", 00:14:58.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.472 "is_configured": true, 00:14:58.472 "data_offset": 2048, 00:14:58.472 "data_size": 63488 00:14:58.472 }, 00:14:58.472 { 00:14:58.472 "name": "pt2", 00:14:58.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.472 "is_configured": true, 00:14:58.472 "data_offset": 2048, 00:14:58.472 "data_size": 63488 00:14:58.472 }, 00:14:58.472 { 00:14:58.472 "name": "pt3", 00:14:58.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.472 "is_configured": true, 00:14:58.472 "data_offset": 2048, 00:14:58.472 "data_size": 63488 00:14:58.472 } 00:14:58.472 ] 00:14:58.472 }' 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.472 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.041 [2024-12-12 09:28:32.896713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.041 "name": "raid_bdev1", 00:14:59.041 "aliases": [ 00:14:59.041 "9a16b81a-8e9c-4753-b80d-2d8148961fb5" 00:14:59.041 ], 00:14:59.041 "product_name": "Raid Volume", 00:14:59.041 "block_size": 512, 00:14:59.041 "num_blocks": 126976, 00:14:59.041 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:59.041 "assigned_rate_limits": { 00:14:59.041 "rw_ios_per_sec": 0, 00:14:59.041 "rw_mbytes_per_sec": 0, 00:14:59.041 "r_mbytes_per_sec": 0, 00:14:59.041 "w_mbytes_per_sec": 0 00:14:59.041 }, 00:14:59.041 "claimed": false, 00:14:59.041 "zoned": false, 00:14:59.041 "supported_io_types": { 00:14:59.041 "read": true, 00:14:59.041 "write": true, 00:14:59.041 "unmap": false, 00:14:59.041 "flush": false, 00:14:59.041 "reset": true, 00:14:59.041 "nvme_admin": false, 00:14:59.041 "nvme_io": false, 00:14:59.041 "nvme_io_md": false, 00:14:59.041 "write_zeroes": true, 00:14:59.041 "zcopy": false, 00:14:59.041 
"get_zone_info": false, 00:14:59.041 "zone_management": false, 00:14:59.041 "zone_append": false, 00:14:59.041 "compare": false, 00:14:59.041 "compare_and_write": false, 00:14:59.041 "abort": false, 00:14:59.041 "seek_hole": false, 00:14:59.041 "seek_data": false, 00:14:59.041 "copy": false, 00:14:59.041 "nvme_iov_md": false 00:14:59.041 }, 00:14:59.041 "driver_specific": { 00:14:59.041 "raid": { 00:14:59.041 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:59.041 "strip_size_kb": 64, 00:14:59.041 "state": "online", 00:14:59.041 "raid_level": "raid5f", 00:14:59.041 "superblock": true, 00:14:59.041 "num_base_bdevs": 3, 00:14:59.041 "num_base_bdevs_discovered": 3, 00:14:59.041 "num_base_bdevs_operational": 3, 00:14:59.041 "base_bdevs_list": [ 00:14:59.041 { 00:14:59.041 "name": "pt1", 00:14:59.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.041 "is_configured": true, 00:14:59.041 "data_offset": 2048, 00:14:59.041 "data_size": 63488 00:14:59.041 }, 00:14:59.041 { 00:14:59.041 "name": "pt2", 00:14:59.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.041 "is_configured": true, 00:14:59.041 "data_offset": 2048, 00:14:59.041 "data_size": 63488 00:14:59.041 }, 00:14:59.041 { 00:14:59.041 "name": "pt3", 00:14:59.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.041 "is_configured": true, 00:14:59.041 "data_offset": 2048, 00:14:59.041 "data_size": 63488 00:14:59.041 } 00:14:59.041 ] 00:14:59.041 } 00:14:59.041 } 00:14:59.041 }' 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:59.041 pt2 00:14:59.041 pt3' 00:14:59.041 09:28:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.041 09:28:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.041 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.301 [2024-12-12 09:28:33.164214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a16b81a-8e9c-4753-b80d-2d8148961fb5 '!=' 9a16b81a-8e9c-4753-b80d-2d8148961fb5 ']' 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.301 [2024-12-12 09:28:33.208100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.301 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.302 "name": "raid_bdev1", 00:14:59.302 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:59.302 "strip_size_kb": 64, 00:14:59.302 "state": "online", 00:14:59.302 "raid_level": "raid5f", 00:14:59.302 "superblock": true, 00:14:59.302 "num_base_bdevs": 3, 00:14:59.302 "num_base_bdevs_discovered": 2, 00:14:59.302 "num_base_bdevs_operational": 2, 00:14:59.302 "base_bdevs_list": [ 00:14:59.302 { 00:14:59.302 "name": null, 00:14:59.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.302 "is_configured": false, 00:14:59.302 "data_offset": 0, 00:14:59.302 "data_size": 63488 00:14:59.302 }, 00:14:59.302 { 00:14:59.302 "name": "pt2", 00:14:59.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.302 "is_configured": true, 00:14:59.302 "data_offset": 2048, 00:14:59.302 "data_size": 63488 00:14:59.302 }, 00:14:59.302 { 00:14:59.302 "name": "pt3", 00:14:59.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.302 "is_configured": true, 00:14:59.302 "data_offset": 2048, 00:14:59.302 "data_size": 63488 00:14:59.302 } 00:14:59.302 ] 00:14:59.302 }' 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.302 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 [2024-12-12 09:28:33.655284] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.871 [2024-12-12 09:28:33.655359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.871 [2024-12-12 09:28:33.655451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.871 [2024-12-12 09:28:33.655512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.871 [2024-12-12 09:28:33.655558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 [2024-12-12 09:28:33.743119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.871 [2024-12-12 09:28:33.743208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.871 [2024-12-12 09:28:33.743253] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:59.871 [2024-12-12 09:28:33.743282] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:59.871 [2024-12-12 09:28:33.745632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.871 [2024-12-12 09:28:33.745721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.871 [2024-12-12 09:28:33.745810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:59.871 [2024-12-12 09:28:33.745895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.871 pt2 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.871 "name": "raid_bdev1", 00:14:59.871 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:14:59.871 "strip_size_kb": 64, 00:14:59.871 "state": "configuring", 00:14:59.871 "raid_level": "raid5f", 00:14:59.871 "superblock": true, 00:14:59.871 "num_base_bdevs": 3, 00:14:59.871 "num_base_bdevs_discovered": 1, 00:14:59.871 "num_base_bdevs_operational": 2, 00:14:59.871 "base_bdevs_list": [ 00:14:59.871 { 00:14:59.871 "name": null, 00:14:59.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.871 "is_configured": false, 00:14:59.871 "data_offset": 2048, 00:14:59.871 "data_size": 63488 00:14:59.871 }, 00:14:59.871 { 00:14:59.871 "name": "pt2", 00:14:59.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.871 "is_configured": true, 00:14:59.871 "data_offset": 2048, 00:14:59.871 "data_size": 63488 00:14:59.871 }, 00:14:59.871 { 00:14:59.871 "name": null, 00:14:59.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.871 "is_configured": false, 00:14:59.871 "data_offset": 2048, 00:14:59.871 "data_size": 63488 00:14:59.871 } 00:14:59.871 ] 00:14:59.871 }' 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.871 09:28:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.441 [2024-12-12 09:28:34.210321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.441 [2024-12-12 09:28:34.210430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.441 [2024-12-12 09:28:34.210464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:00.441 [2024-12-12 09:28:34.210493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.441 [2024-12-12 09:28:34.210908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.441 [2024-12-12 09:28:34.210941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.441 [2024-12-12 09:28:34.211013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:00.441 [2024-12-12 09:28:34.211038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.441 [2024-12-12 09:28:34.211152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:00.441 [2024-12-12 09:28:34.211164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:00.441 [2024-12-12 09:28:34.211411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:00.441 pt3 00:15:00.441 [2024-12-12 09:28:34.216421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:00.441 [2024-12-12 09:28:34.216442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:00.441 [2024-12-12 09:28:34.216710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.441 09:28:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.441 "name": "raid_bdev1", 00:15:00.441 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:15:00.441 "strip_size_kb": 64, 00:15:00.441 "state": "online", 00:15:00.441 "raid_level": "raid5f", 00:15:00.441 "superblock": true, 00:15:00.441 "num_base_bdevs": 3, 00:15:00.441 "num_base_bdevs_discovered": 2, 00:15:00.441 "num_base_bdevs_operational": 2, 00:15:00.441 "base_bdevs_list": [ 00:15:00.441 { 00:15:00.441 "name": null, 00:15:00.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.441 "is_configured": false, 00:15:00.441 "data_offset": 2048, 00:15:00.441 "data_size": 63488 00:15:00.441 }, 00:15:00.441 { 00:15:00.441 "name": "pt2", 00:15:00.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.441 "is_configured": true, 00:15:00.441 "data_offset": 2048, 00:15:00.441 "data_size": 63488 00:15:00.441 }, 00:15:00.441 { 00:15:00.441 "name": "pt3", 00:15:00.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.441 "is_configured": true, 00:15:00.441 "data_offset": 2048, 00:15:00.441 "data_size": 63488 00:15:00.441 } 00:15:00.441 ] 00:15:00.441 }' 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.441 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.701 [2024-12-12 09:28:34.643237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.701 [2024-12-12 09:28:34.643308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.701 [2024-12-12 09:28:34.643402] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.701 [2024-12-12 09:28:34.643486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.701 [2024-12-12 09:28:34.643528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.701 [2024-12-12 09:28:34.715149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.701 [2024-12-12 09:28:34.715241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.701 [2024-12-12 09:28:34.715284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:00.701 [2024-12-12 09:28:34.715313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.701 [2024-12-12 09:28:34.717687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.701 [2024-12-12 09:28:34.717758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.701 [2024-12-12 09:28:34.717863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:00.701 [2024-12-12 09:28:34.717932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.701 [2024-12-12 09:28:34.718135] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:00.701 [2024-12-12 09:28:34.718195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.701 [2024-12-12 09:28:34.718234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:00.701 [2024-12-12 09:28:34.718331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.701 pt1 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:00.701 09:28:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.701 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.702 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.702 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.962 "name": "raid_bdev1", 00:15:00.962 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:15:00.962 "strip_size_kb": 64, 00:15:00.962 "state": "configuring", 00:15:00.962 "raid_level": "raid5f", 00:15:00.962 
"superblock": true, 00:15:00.962 "num_base_bdevs": 3, 00:15:00.962 "num_base_bdevs_discovered": 1, 00:15:00.962 "num_base_bdevs_operational": 2, 00:15:00.962 "base_bdevs_list": [ 00:15:00.962 { 00:15:00.962 "name": null, 00:15:00.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.962 "is_configured": false, 00:15:00.962 "data_offset": 2048, 00:15:00.962 "data_size": 63488 00:15:00.962 }, 00:15:00.962 { 00:15:00.962 "name": "pt2", 00:15:00.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.962 "is_configured": true, 00:15:00.962 "data_offset": 2048, 00:15:00.962 "data_size": 63488 00:15:00.962 }, 00:15:00.962 { 00:15:00.962 "name": null, 00:15:00.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.962 "is_configured": false, 00:15:00.962 "data_offset": 2048, 00:15:00.962 "data_size": 63488 00:15:00.962 } 00:15:00.962 ] 00:15:00.962 }' 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.962 09:28:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.222 [2024-12-12 09:28:35.162365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:01.222 [2024-12-12 09:28:35.162450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.222 [2024-12-12 09:28:35.162499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:01.222 [2024-12-12 09:28:35.162525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.222 [2024-12-12 09:28:35.162958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.222 [2024-12-12 09:28:35.163033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:01.222 [2024-12-12 09:28:35.163125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:01.222 [2024-12-12 09:28:35.163148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.222 [2024-12-12 09:28:35.163274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:01.222 [2024-12-12 09:28:35.163282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:01.222 [2024-12-12 09:28:35.163538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:01.222 [2024-12-12 09:28:35.168732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:01.222 [2024-12-12 09:28:35.168759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:01.222 [2024-12-12 09:28:35.169010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.222 pt3 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.222 "name": "raid_bdev1", 00:15:01.222 "uuid": "9a16b81a-8e9c-4753-b80d-2d8148961fb5", 00:15:01.222 "strip_size_kb": 64, 00:15:01.222 "state": "online", 00:15:01.222 "raid_level": 
"raid5f", 00:15:01.222 "superblock": true, 00:15:01.222 "num_base_bdevs": 3, 00:15:01.222 "num_base_bdevs_discovered": 2, 00:15:01.222 "num_base_bdevs_operational": 2, 00:15:01.222 "base_bdevs_list": [ 00:15:01.222 { 00:15:01.222 "name": null, 00:15:01.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.222 "is_configured": false, 00:15:01.222 "data_offset": 2048, 00:15:01.222 "data_size": 63488 00:15:01.222 }, 00:15:01.222 { 00:15:01.222 "name": "pt2", 00:15:01.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.222 "is_configured": true, 00:15:01.222 "data_offset": 2048, 00:15:01.222 "data_size": 63488 00:15:01.222 }, 00:15:01.222 { 00:15:01.222 "name": "pt3", 00:15:01.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.222 "is_configured": true, 00:15:01.222 "data_offset": 2048, 00:15:01.222 "data_size": 63488 00:15:01.222 } 00:15:01.222 ] 00:15:01.222 }' 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.222 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.790 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:01.790 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.790 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.790 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:01.790 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.790 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.791 [2024-12-12 09:28:35.675335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9a16b81a-8e9c-4753-b80d-2d8148961fb5 '!=' 9a16b81a-8e9c-4753-b80d-2d8148961fb5 ']' 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82273 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 82273 ']' 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 82273 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82273 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82273' 00:15:01.791 killing process with pid 82273 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 82273 00:15:01.791 [2024-12-12 09:28:35.757820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.791 [2024-12-12 09:28:35.757896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:01.791 [2024-12-12 09:28:35.757947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.791 [2024-12-12 09:28:35.757980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:01.791 09:28:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 82273 00:15:02.050 [2024-12-12 09:28:36.066926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.432 09:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:03.432 00:15:03.432 real 0m7.896s 00:15:03.432 user 0m12.148s 00:15:03.432 sys 0m1.585s 00:15:03.432 ************************************ 00:15:03.432 END TEST raid5f_superblock_test 00:15:03.432 ************************************ 00:15:03.432 09:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.432 09:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 09:28:37 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:03.432 09:28:37 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:03.432 09:28:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:03.432 09:28:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.432 09:28:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 ************************************ 00:15:03.432 START TEST raid5f_rebuild_test 00:15:03.432 ************************************ 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:03.432 09:28:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82711 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82711 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82711 ']' 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
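The `bdevperf -o 3M` invocation above immediately triggers the EAL notice that zero copy will be skipped, because the per-I/O size exceeds the zero-copy threshold. A quick sketch of that comparison, with both values taken from the log (3M I/O size, 65536-byte threshold):

```shell
# Reproduce the zero-copy threshold check reported in the EAL notice above.
io_size=$(( 3 * 1024 * 1024 ))        # bdevperf -o 3M  -> 3145728 bytes
zero_copy_threshold=$(( 64 * 1024 ))  # 65536 bytes, per the log message

if [ "$io_size" -gt "$zero_copy_threshold" ]; then
    echo "I/O size of $io_size is greater than zero copy threshold ($zero_copy_threshold)."
fi
```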
00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.432 09:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 [2024-12-12 09:28:37.430824] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:15:03.432 [2024-12-12 09:28:37.431037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.432 Zero copy mechanism will not be used. 00:15:03.432 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82711 ] 00:15:03.692 [2024-12-12 09:28:37.606084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.951 [2024-12-12 09:28:37.737371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.951 [2024-12-12 09:28:37.959990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.951 [2024-12-12 09:28:37.960031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 BaseBdev1_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 
09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 [2024-12-12 09:28:38.301750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.522 [2024-12-12 09:28:38.301910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.522 [2024-12-12 09:28:38.301954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.522 [2024-12-12 09:28:38.302002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.522 [2024-12-12 09:28:38.304267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.522 [2024-12-12 09:28:38.304342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.522 BaseBdev1 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 BaseBdev2_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 [2024-12-12 09:28:38.357325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:04.522 [2024-12-12 09:28:38.357394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.522 [2024-12-12 09:28:38.357415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:04.522 [2024-12-12 09:28:38.357426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.522 [2024-12-12 09:28:38.359824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.522 [2024-12-12 09:28:38.359871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.522 BaseBdev2 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 BaseBdev3_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 [2024-12-12 09:28:38.434002] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:04.522 [2024-12-12 09:28:38.434119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.522 [2024-12-12 09:28:38.434176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:04.522 [2024-12-12 09:28:38.434210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.522 [2024-12-12 09:28:38.436578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.522 [2024-12-12 09:28:38.436678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:04.522 BaseBdev3 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 spare_malloc 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 spare_delay 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 [2024-12-12 09:28:38.505269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.522 [2024-12-12 09:28:38.505382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.522 [2024-12-12 09:28:38.505437] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:04.522 [2024-12-12 09:28:38.505468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.522 [2024-12-12 09:28:38.507769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.522 [2024-12-12 09:28:38.507811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.522 spare 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.522 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.522 [2024-12-12 09:28:38.517319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.522 [2024-12-12 09:28:38.519405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.523 [2024-12-12 09:28:38.519506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.523 [2024-12-12 09:28:38.519625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:04.523 [2024-12-12 09:28:38.519673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:04.523 [2024-12-12 
09:28:38.519951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:04.523 [2024-12-12 09:28:38.525655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:04.523 [2024-12-12 09:28:38.525712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:04.523 [2024-12-12 09:28:38.525921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.523 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.783 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.783 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.783 "name": "raid_bdev1", 00:15:04.783 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:04.783 "strip_size_kb": 64, 00:15:04.783 "state": "online", 00:15:04.783 "raid_level": "raid5f", 00:15:04.783 "superblock": false, 00:15:04.783 "num_base_bdevs": 3, 00:15:04.783 "num_base_bdevs_discovered": 3, 00:15:04.783 "num_base_bdevs_operational": 3, 00:15:04.783 "base_bdevs_list": [ 00:15:04.783 { 00:15:04.783 "name": "BaseBdev1", 00:15:04.783 "uuid": "21ec5093-1ced-5eed-a9e0-45213962afe5", 00:15:04.783 "is_configured": true, 00:15:04.783 "data_offset": 0, 00:15:04.783 "data_size": 65536 00:15:04.783 }, 00:15:04.783 { 00:15:04.783 "name": "BaseBdev2", 00:15:04.783 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:04.783 "is_configured": true, 00:15:04.783 "data_offset": 0, 00:15:04.783 "data_size": 65536 00:15:04.783 }, 00:15:04.783 { 00:15:04.783 "name": "BaseBdev3", 00:15:04.783 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:04.783 "is_configured": true, 00:15:04.783 "data_offset": 0, 00:15:04.783 "data_size": 65536 00:15:04.783 } 00:15:04.783 ] 00:15:04.783 }' 00:15:04.783 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.783 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.043 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:05.043 09:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.043 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.043 09:28:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.043 [2024-12-12 09:28:38.964386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.043 09:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.043 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:05.303 [2024-12-12 09:28:39.215853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:05.303 /dev/nbd0 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.303 1+0 records in 00:15:05.303 1+0 records out 00:15:05.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647694 s, 6.3 MB/s 00:15:05.303 
09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:05.303 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:06.240 512+0 records in 00:15:06.240 512+0 records out 00:15:06.240 67108864 bytes (67 MB, 64 MiB) copied, 0.625638 s, 107 MB/s 00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
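The `write_unit_size=256` and the `dd bs=131072 count=512` figures above follow from the array geometry reported earlier in the log (strip_size_kb 64, blocklen 512, 3 base bdevs). A sketch recomputing them, on the assumption that a raid5f full-stripe write spans the N-1 data drives of the stripe:

```shell
# Recompute the full-stripe write geometry from values reported in the log.
strip_size_kb=64     # "strip_size_kb": 64 in the raid_bdev_info JSON
blocklen=512         # "blockcnt 131072, blocklen 512"
num_base_bdevs=3     # "num_base_bdevs": 3

# Blocks per strip on one base bdev: 64 KiB / 512 B = 128 (the "echo 128" above)
strip_blocks=$(( strip_size_kb * 1024 / blocklen ))

# Full-stripe write covers the data drives only: (N - 1) strips
write_unit_size=$(( strip_blocks * (num_base_bdevs - 1) ))

# dd block size and total written: bs = 256 blocks * 512 B, 512 such writes
bs=$(( write_unit_size * blocklen ))
total=$(( bs * 512 ))

echo "strip_blocks=$strip_blocks write_unit_size=$write_unit_size bs=$bs total=$total"
```

The total, 67108864 bytes (64 MiB), matches both the `dd` summary line and the raid bdev size of 131072 blocks of 512 bytes.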
00:15:06.240 09:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:06.240 [2024-12-12 09:28:40.142239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.240 [2024-12-12 09:28:40.177706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.240 09:28:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.240 "name": "raid_bdev1", 00:15:06.240 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:06.240 "strip_size_kb": 64, 00:15:06.240 "state": "online", 00:15:06.240 "raid_level": "raid5f", 00:15:06.240 "superblock": false, 00:15:06.240 "num_base_bdevs": 3, 00:15:06.240 "num_base_bdevs_discovered": 2, 00:15:06.240 "num_base_bdevs_operational": 2, 00:15:06.240 "base_bdevs_list": [ 00:15:06.240 { 00:15:06.240 "name": null, 00:15:06.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.240 "is_configured": false, 00:15:06.240 "data_offset": 0, 00:15:06.240 "data_size": 65536 00:15:06.240 }, 00:15:06.240 { 00:15:06.240 
"name": "BaseBdev2", 00:15:06.240 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:06.240 "is_configured": true, 00:15:06.240 "data_offset": 0, 00:15:06.240 "data_size": 65536 00:15:06.240 }, 00:15:06.240 { 00:15:06.240 "name": "BaseBdev3", 00:15:06.240 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:06.240 "is_configured": true, 00:15:06.240 "data_offset": 0, 00:15:06.240 "data_size": 65536 00:15:06.240 } 00:15:06.240 ] 00:15:06.240 }' 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.240 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.817 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.817 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.817 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.817 [2024-12-12 09:28:40.688844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.817 [2024-12-12 09:28:40.706021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:06.817 09:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.817 09:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:06.817 [2024-12-12 09:28:40.713463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.803 "name": "raid_bdev1", 00:15:07.803 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:07.803 "strip_size_kb": 64, 00:15:07.803 "state": "online", 00:15:07.803 "raid_level": "raid5f", 00:15:07.803 "superblock": false, 00:15:07.803 "num_base_bdevs": 3, 00:15:07.803 "num_base_bdevs_discovered": 3, 00:15:07.803 "num_base_bdevs_operational": 3, 00:15:07.803 "process": { 00:15:07.803 "type": "rebuild", 00:15:07.803 "target": "spare", 00:15:07.803 "progress": { 00:15:07.803 "blocks": 20480, 00:15:07.803 "percent": 15 00:15:07.803 } 00:15:07.803 }, 00:15:07.803 "base_bdevs_list": [ 00:15:07.803 { 00:15:07.803 "name": "spare", 00:15:07.803 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:07.803 "is_configured": true, 00:15:07.803 "data_offset": 0, 00:15:07.803 "data_size": 65536 00:15:07.803 }, 00:15:07.803 { 00:15:07.803 "name": "BaseBdev2", 00:15:07.803 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:07.803 "is_configured": true, 00:15:07.803 "data_offset": 0, 00:15:07.803 "data_size": 65536 00:15:07.803 }, 00:15:07.803 { 00:15:07.803 "name": "BaseBdev3", 00:15:07.803 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:07.803 "is_configured": true, 00:15:07.803 "data_offset": 0, 00:15:07.803 
"data_size": 65536 00:15:07.803 } 00:15:07.803 ] 00:15:07.803 }' 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.803 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.062 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.062 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:08.062 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.062 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.062 [2024-12-12 09:28:41.860538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.063 [2024-12-12 09:28:41.922412] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:08.063 [2024-12-12 09:28:41.922510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.063 [2024-12-12 09:28:41.922548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.063 [2024-12-12 09:28:41.922556] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.063 09:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.063 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.063 "name": "raid_bdev1", 00:15:08.063 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:08.063 "strip_size_kb": 64, 00:15:08.063 "state": "online", 00:15:08.063 "raid_level": "raid5f", 00:15:08.063 "superblock": false, 00:15:08.063 "num_base_bdevs": 3, 00:15:08.063 "num_base_bdevs_discovered": 2, 00:15:08.063 "num_base_bdevs_operational": 2, 00:15:08.063 "base_bdevs_list": [ 00:15:08.063 { 00:15:08.063 "name": null, 00:15:08.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.063 "is_configured": false, 00:15:08.063 "data_offset": 0, 00:15:08.063 "data_size": 65536 00:15:08.063 }, 00:15:08.063 { 00:15:08.063 "name": "BaseBdev2", 00:15:08.063 
"uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:08.063 "is_configured": true, 00:15:08.063 "data_offset": 0, 00:15:08.063 "data_size": 65536 00:15:08.063 }, 00:15:08.063 { 00:15:08.063 "name": "BaseBdev3", 00:15:08.063 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:08.063 "is_configured": true, 00:15:08.063 "data_offset": 0, 00:15:08.063 "data_size": 65536 00:15:08.063 } 00:15:08.063 ] 00:15:08.063 }' 00:15:08.063 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.063 09:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.630 "name": "raid_bdev1", 00:15:08.630 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:08.630 "strip_size_kb": 64, 00:15:08.630 "state": "online", 00:15:08.630 "raid_level": 
"raid5f", 00:15:08.630 "superblock": false, 00:15:08.630 "num_base_bdevs": 3, 00:15:08.630 "num_base_bdevs_discovered": 2, 00:15:08.630 "num_base_bdevs_operational": 2, 00:15:08.630 "base_bdevs_list": [ 00:15:08.630 { 00:15:08.630 "name": null, 00:15:08.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.630 "is_configured": false, 00:15:08.630 "data_offset": 0, 00:15:08.630 "data_size": 65536 00:15:08.630 }, 00:15:08.630 { 00:15:08.630 "name": "BaseBdev2", 00:15:08.630 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:08.630 "is_configured": true, 00:15:08.630 "data_offset": 0, 00:15:08.630 "data_size": 65536 00:15:08.630 }, 00:15:08.630 { 00:15:08.630 "name": "BaseBdev3", 00:15:08.630 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:08.630 "is_configured": true, 00:15:08.630 "data_offset": 0, 00:15:08.630 "data_size": 65536 00:15:08.630 } 00:15:08.630 ] 00:15:08.630 }' 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.630 [2024-12-12 09:28:42.510076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.630 [2024-12-12 09:28:42.525713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.630 09:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:08.630 [2024-12-12 09:28:42.533074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.568 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.568 "name": "raid_bdev1", 00:15:09.568 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:09.568 "strip_size_kb": 64, 00:15:09.568 "state": "online", 00:15:09.568 "raid_level": "raid5f", 00:15:09.568 "superblock": false, 00:15:09.568 "num_base_bdevs": 3, 00:15:09.568 "num_base_bdevs_discovered": 3, 00:15:09.569 "num_base_bdevs_operational": 3, 00:15:09.569 "process": { 00:15:09.569 "type": "rebuild", 00:15:09.569 "target": "spare", 00:15:09.569 "progress": { 00:15:09.569 "blocks": 20480, 00:15:09.569 
"percent": 15 00:15:09.569 } 00:15:09.569 }, 00:15:09.569 "base_bdevs_list": [ 00:15:09.569 { 00:15:09.569 "name": "spare", 00:15:09.569 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:09.569 "is_configured": true, 00:15:09.569 "data_offset": 0, 00:15:09.569 "data_size": 65536 00:15:09.569 }, 00:15:09.569 { 00:15:09.569 "name": "BaseBdev2", 00:15:09.569 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:09.569 "is_configured": true, 00:15:09.569 "data_offset": 0, 00:15:09.569 "data_size": 65536 00:15:09.569 }, 00:15:09.569 { 00:15:09.569 "name": "BaseBdev3", 00:15:09.569 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:09.569 "is_configured": true, 00:15:09.569 "data_offset": 0, 00:15:09.569 "data_size": 65536 00:15:09.569 } 00:15:09.569 ] 00:15:09.569 }' 00:15:09.569 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=549 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.828 "name": "raid_bdev1", 00:15:09.828 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:09.828 "strip_size_kb": 64, 00:15:09.828 "state": "online", 00:15:09.828 "raid_level": "raid5f", 00:15:09.828 "superblock": false, 00:15:09.828 "num_base_bdevs": 3, 00:15:09.828 "num_base_bdevs_discovered": 3, 00:15:09.828 "num_base_bdevs_operational": 3, 00:15:09.828 "process": { 00:15:09.828 "type": "rebuild", 00:15:09.828 "target": "spare", 00:15:09.828 "progress": { 00:15:09.828 "blocks": 22528, 00:15:09.828 "percent": 17 00:15:09.828 } 00:15:09.828 }, 00:15:09.828 "base_bdevs_list": [ 00:15:09.828 { 00:15:09.828 "name": "spare", 00:15:09.828 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:09.828 "is_configured": true, 00:15:09.828 "data_offset": 0, 00:15:09.828 "data_size": 65536 00:15:09.828 }, 00:15:09.828 { 00:15:09.828 "name": "BaseBdev2", 00:15:09.828 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:09.828 "is_configured": true, 00:15:09.828 "data_offset": 0, 00:15:09.828 
"data_size": 65536 00:15:09.828 }, 00:15:09.828 { 00:15:09.828 "name": "BaseBdev3", 00:15:09.828 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:09.828 "is_configured": true, 00:15:09.828 "data_offset": 0, 00:15:09.828 "data_size": 65536 00:15:09.828 } 00:15:09.828 ] 00:15:09.828 }' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.828 09:28:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.207 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.207 "name": "raid_bdev1", 00:15:11.207 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:11.207 "strip_size_kb": 64, 00:15:11.207 "state": "online", 00:15:11.207 "raid_level": "raid5f", 00:15:11.207 "superblock": false, 00:15:11.207 "num_base_bdevs": 3, 00:15:11.207 "num_base_bdevs_discovered": 3, 00:15:11.207 "num_base_bdevs_operational": 3, 00:15:11.207 "process": { 00:15:11.207 "type": "rebuild", 00:15:11.207 "target": "spare", 00:15:11.207 "progress": { 00:15:11.207 "blocks": 47104, 00:15:11.207 "percent": 35 00:15:11.207 } 00:15:11.207 }, 00:15:11.207 "base_bdevs_list": [ 00:15:11.207 { 00:15:11.207 "name": "spare", 00:15:11.207 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:11.207 "is_configured": true, 00:15:11.207 "data_offset": 0, 00:15:11.207 "data_size": 65536 00:15:11.207 }, 00:15:11.208 { 00:15:11.208 "name": "BaseBdev2", 00:15:11.208 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:11.208 "is_configured": true, 00:15:11.208 "data_offset": 0, 00:15:11.208 "data_size": 65536 00:15:11.208 }, 00:15:11.208 { 00:15:11.208 "name": "BaseBdev3", 00:15:11.208 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:11.208 "is_configured": true, 00:15:11.208 "data_offset": 0, 00:15:11.208 "data_size": 65536 00:15:11.208 } 00:15:11.208 ] 00:15:11.208 }' 00:15:11.208 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.208 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.208 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.208 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.208 09:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.147 09:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.147 "name": "raid_bdev1", 00:15:12.147 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:12.147 "strip_size_kb": 64, 00:15:12.147 "state": "online", 00:15:12.147 "raid_level": "raid5f", 00:15:12.147 "superblock": false, 00:15:12.147 "num_base_bdevs": 3, 00:15:12.147 "num_base_bdevs_discovered": 3, 00:15:12.147 "num_base_bdevs_operational": 3, 00:15:12.147 "process": { 00:15:12.147 "type": "rebuild", 00:15:12.147 "target": "spare", 00:15:12.147 "progress": { 00:15:12.147 "blocks": 69632, 00:15:12.147 "percent": 53 00:15:12.147 } 00:15:12.147 }, 00:15:12.147 "base_bdevs_list": [ 00:15:12.147 { 00:15:12.147 "name": "spare", 00:15:12.147 "uuid": 
"d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:12.147 "is_configured": true, 00:15:12.147 "data_offset": 0, 00:15:12.147 "data_size": 65536 00:15:12.147 }, 00:15:12.147 { 00:15:12.147 "name": "BaseBdev2", 00:15:12.147 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:12.147 "is_configured": true, 00:15:12.147 "data_offset": 0, 00:15:12.147 "data_size": 65536 00:15:12.147 }, 00:15:12.147 { 00:15:12.147 "name": "BaseBdev3", 00:15:12.147 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:12.147 "is_configured": true, 00:15:12.147 "data_offset": 0, 00:15:12.147 "data_size": 65536 00:15:12.147 } 00:15:12.147 ] 00:15:12.147 }' 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.147 09:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.524 09:28:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.524 "name": "raid_bdev1", 00:15:13.524 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:13.524 "strip_size_kb": 64, 00:15:13.524 "state": "online", 00:15:13.524 "raid_level": "raid5f", 00:15:13.524 "superblock": false, 00:15:13.524 "num_base_bdevs": 3, 00:15:13.524 "num_base_bdevs_discovered": 3, 00:15:13.524 "num_base_bdevs_operational": 3, 00:15:13.524 "process": { 00:15:13.524 "type": "rebuild", 00:15:13.524 "target": "spare", 00:15:13.524 "progress": { 00:15:13.524 "blocks": 92160, 00:15:13.524 "percent": 70 00:15:13.524 } 00:15:13.524 }, 00:15:13.524 "base_bdevs_list": [ 00:15:13.524 { 00:15:13.524 "name": "spare", 00:15:13.524 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:13.524 "is_configured": true, 00:15:13.524 "data_offset": 0, 00:15:13.524 "data_size": 65536 00:15:13.524 }, 00:15:13.524 { 00:15:13.524 "name": "BaseBdev2", 00:15:13.524 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:13.524 "is_configured": true, 00:15:13.524 "data_offset": 0, 00:15:13.524 "data_size": 65536 00:15:13.524 }, 00:15:13.524 { 00:15:13.524 "name": "BaseBdev3", 00:15:13.524 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:13.524 "is_configured": true, 00:15:13.524 "data_offset": 0, 00:15:13.524 "data_size": 65536 00:15:13.524 } 00:15:13.524 ] 00:15:13.524 }' 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.524 09:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.463 "name": "raid_bdev1", 00:15:14.463 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:14.463 "strip_size_kb": 64, 00:15:14.463 "state": "online", 00:15:14.463 "raid_level": "raid5f", 00:15:14.463 "superblock": false, 00:15:14.463 "num_base_bdevs": 3, 00:15:14.463 "num_base_bdevs_discovered": 3, 00:15:14.463 
"num_base_bdevs_operational": 3, 00:15:14.463 "process": { 00:15:14.463 "type": "rebuild", 00:15:14.463 "target": "spare", 00:15:14.463 "progress": { 00:15:14.463 "blocks": 116736, 00:15:14.463 "percent": 89 00:15:14.463 } 00:15:14.463 }, 00:15:14.463 "base_bdevs_list": [ 00:15:14.463 { 00:15:14.463 "name": "spare", 00:15:14.463 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:14.463 "is_configured": true, 00:15:14.463 "data_offset": 0, 00:15:14.463 "data_size": 65536 00:15:14.463 }, 00:15:14.463 { 00:15:14.463 "name": "BaseBdev2", 00:15:14.463 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:14.463 "is_configured": true, 00:15:14.463 "data_offset": 0, 00:15:14.463 "data_size": 65536 00:15:14.463 }, 00:15:14.463 { 00:15:14.463 "name": "BaseBdev3", 00:15:14.463 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:14.463 "is_configured": true, 00:15:14.463 "data_offset": 0, 00:15:14.463 "data_size": 65536 00:15:14.463 } 00:15:14.463 ] 00:15:14.463 }' 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.463 09:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.031 [2024-12-12 09:28:48.976266] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:15.031 [2024-12-12 09:28:48.976342] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:15.031 [2024-12-12 09:28:48.976385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.599 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.599 "name": "raid_bdev1", 00:15:15.599 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:15.599 "strip_size_kb": 64, 00:15:15.599 "state": "online", 00:15:15.599 "raid_level": "raid5f", 00:15:15.599 "superblock": false, 00:15:15.599 "num_base_bdevs": 3, 00:15:15.599 "num_base_bdevs_discovered": 3, 00:15:15.599 "num_base_bdevs_operational": 3, 00:15:15.599 "base_bdevs_list": [ 00:15:15.599 { 00:15:15.599 "name": "spare", 00:15:15.599 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:15.599 "is_configured": true, 00:15:15.599 "data_offset": 0, 00:15:15.599 "data_size": 65536 00:15:15.599 }, 00:15:15.599 { 00:15:15.599 "name": "BaseBdev2", 00:15:15.599 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:15.599 "is_configured": true, 00:15:15.599 
"data_offset": 0, 00:15:15.599 "data_size": 65536 00:15:15.599 }, 00:15:15.599 { 00:15:15.599 "name": "BaseBdev3", 00:15:15.599 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:15.599 "is_configured": true, 00:15:15.599 "data_offset": 0, 00:15:15.599 "data_size": 65536 00:15:15.600 } 00:15:15.600 ] 00:15:15.600 }' 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.600 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.867 09:28:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.867 "name": "raid_bdev1", 00:15:15.867 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:15.867 "strip_size_kb": 64, 00:15:15.867 "state": "online", 00:15:15.867 "raid_level": "raid5f", 00:15:15.867 "superblock": false, 00:15:15.867 "num_base_bdevs": 3, 00:15:15.867 "num_base_bdevs_discovered": 3, 00:15:15.867 "num_base_bdevs_operational": 3, 00:15:15.867 "base_bdevs_list": [ 00:15:15.867 { 00:15:15.867 "name": "spare", 00:15:15.867 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:15.867 "is_configured": true, 00:15:15.867 "data_offset": 0, 00:15:15.867 "data_size": 65536 00:15:15.867 }, 00:15:15.867 { 00:15:15.867 "name": "BaseBdev2", 00:15:15.867 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:15.867 "is_configured": true, 00:15:15.867 "data_offset": 0, 00:15:15.867 "data_size": 65536 00:15:15.867 }, 00:15:15.867 { 00:15:15.867 "name": "BaseBdev3", 00:15:15.867 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:15.867 "is_configured": true, 00:15:15.867 "data_offset": 0, 00:15:15.867 "data_size": 65536 00:15:15.867 } 00:15:15.867 ] 00:15:15.867 }' 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.867 09:28:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.867 "name": "raid_bdev1", 00:15:15.867 "uuid": "8d0781b9-edc5-4fd1-8d1a-3f18c1bd098a", 00:15:15.867 "strip_size_kb": 64, 00:15:15.867 "state": "online", 00:15:15.867 "raid_level": "raid5f", 00:15:15.867 "superblock": false, 00:15:15.867 "num_base_bdevs": 3, 00:15:15.867 "num_base_bdevs_discovered": 3, 00:15:15.867 "num_base_bdevs_operational": 3, 00:15:15.867 "base_bdevs_list": [ 00:15:15.867 { 00:15:15.867 "name": "spare", 00:15:15.867 "uuid": "d7a0b90d-91c3-5af8-8c24-7ff8fe468c0f", 00:15:15.867 "is_configured": true, 00:15:15.867 "data_offset": 0, 00:15:15.867 "data_size": 65536 00:15:15.867 }, 00:15:15.867 { 00:15:15.867 
"name": "BaseBdev2", 00:15:15.867 "uuid": "04088cfd-4fcf-5231-9eda-16c54f791a8e", 00:15:15.867 "is_configured": true, 00:15:15.867 "data_offset": 0, 00:15:15.867 "data_size": 65536 00:15:15.867 }, 00:15:15.867 { 00:15:15.867 "name": "BaseBdev3", 00:15:15.867 "uuid": "7c95761e-af89-5fca-a386-15023808bd0a", 00:15:15.867 "is_configured": true, 00:15:15.867 "data_offset": 0, 00:15:15.867 "data_size": 65536 00:15:15.867 } 00:15:15.867 ] 00:15:15.867 }' 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.867 09:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.437 [2024-12-12 09:28:50.238619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.437 [2024-12-12 09:28:50.238651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.437 [2024-12-12 09:28:50.238734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.437 [2024-12-12 09:28:50.238811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.437 [2024-12-12 09:28:50.238847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.437 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:16.697 /dev/nbd0 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.697 1+0 records in 00:15:16.697 1+0 records out 00:15:16.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281632 s, 14.5 MB/s 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.697 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:16.957 /dev/nbd1 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.957 1+0 records in 00:15:16.957 1+0 records out 00:15:16.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487532 s, 8.4 MB/s 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.957 09:28:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.957 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.216 09:28:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.216 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82711 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82711 ']' 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82711 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82711 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82711' 00:15:17.476 killing process with pid 82711 00:15:17.476 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82711 00:15:17.476 Received shutdown signal, test time was about 60.000000 seconds 00:15:17.476 00:15:17.476 Latency(us) 00:15:17.476 [2024-12-12T09:28:51.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.477 [2024-12-12T09:28:51.500Z] =================================================================================================================== 00:15:17.477 [2024-12-12T09:28:51.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.477 [2024-12-12 09:28:51.472052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.477 09:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82711 00:15:18.046 [2024-12-12 09:28:51.876286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.428 ************************************ 00:15:19.428 END TEST raid5f_rebuild_test 00:15:19.428 ************************************ 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:19.428 00:15:19.428 real 0m15.705s 00:15:19.428 user 0m19.048s 00:15:19.428 sys 0m2.512s 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.428 09:28:53 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:19.428 09:28:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:19.428 09:28:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.428 09:28:53 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:19.428 ************************************ 00:15:19.428 START TEST raid5f_rebuild_test_sb 00:15:19.428 ************************************ 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=83157 00:15:19.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 83157 00:15:19.428 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:19.429 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83157 ']' 00:15:19.429 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.429 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.429 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.429 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.429 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.429 [2024-12-12 09:28:53.220945] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:15:19.429 [2024-12-12 09:28:53.221203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83157 ] 00:15:19.429 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:19.429 Zero copy mechanism will not be used. 
00:15:19.429 [2024-12-12 09:28:53.409236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.688 [2024-12-12 09:28:53.541740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.949 [2024-12-12 09:28:53.775388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.949 [2024-12-12 09:28:53.775483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 BaseBdev1_malloc 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 [2024-12-12 09:28:54.085839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:20.209 [2024-12-12 09:28:54.085914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.209 [2024-12-12 09:28:54.085939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.209 
[2024-12-12 09:28:54.085950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.209 [2024-12-12 09:28:54.088360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.209 [2024-12-12 09:28:54.088508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.209 BaseBdev1 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 BaseBdev2_malloc 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 [2024-12-12 09:28:54.142062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:20.209 [2024-12-12 09:28:54.142185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.209 [2024-12-12 09:28:54.142210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.209 [2024-12-12 09:28:54.142223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.209 [2024-12-12 09:28:54.144648] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.209 [2024-12-12 09:28:54.144687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.209 BaseBdev2 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 BaseBdev3_malloc 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 [2024-12-12 09:28:54.234471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:20.470 [2024-12-12 09:28:54.234592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.470 [2024-12-12 09:28:54.234631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:20.470 [2024-12-12 09:28:54.234661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.470 [2024-12-12 09:28:54.237113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.470 [2024-12-12 09:28:54.237215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:20.470 BaseBdev3 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 spare_malloc 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 spare_delay 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 [2024-12-12 09:28:54.307137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.470 [2024-12-12 09:28:54.307189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.470 [2024-12-12 09:28:54.307209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:20.470 [2024-12-12 09:28:54.307220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.470 [2024-12-12 09:28:54.309569] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.470 [2024-12-12 09:28:54.309612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.470 spare 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 [2024-12-12 09:28:54.319191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.470 [2024-12-12 09:28:54.321160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.470 [2024-12-12 09:28:54.321221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.470 [2024-12-12 09:28:54.321425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:20.470 [2024-12-12 09:28:54.321443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.470 [2024-12-12 09:28:54.321686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:20.470 [2024-12-12 09:28:54.327442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:20.470 [2024-12-12 09:28:54.327514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:20.470 [2024-12-12 09:28:54.327760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.470 09:28:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.470 "name": "raid_bdev1", 00:15:20.470 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:20.470 "strip_size_kb": 64, 00:15:20.470 "state": "online", 00:15:20.470 "raid_level": "raid5f", 00:15:20.470 "superblock": true, 
00:15:20.470 "num_base_bdevs": 3, 00:15:20.470 "num_base_bdevs_discovered": 3, 00:15:20.470 "num_base_bdevs_operational": 3, 00:15:20.470 "base_bdevs_list": [ 00:15:20.470 { 00:15:20.470 "name": "BaseBdev1", 00:15:20.470 "uuid": "f617986c-2e63-5ba1-9270-feffe9e169c1", 00:15:20.470 "is_configured": true, 00:15:20.470 "data_offset": 2048, 00:15:20.470 "data_size": 63488 00:15:20.470 }, 00:15:20.470 { 00:15:20.470 "name": "BaseBdev2", 00:15:20.470 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:20.470 "is_configured": true, 00:15:20.470 "data_offset": 2048, 00:15:20.470 "data_size": 63488 00:15:20.470 }, 00:15:20.470 { 00:15:20.470 "name": "BaseBdev3", 00:15:20.470 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:20.470 "is_configured": true, 00:15:20.470 "data_offset": 2048, 00:15:20.470 "data_size": 63488 00:15:20.470 } 00:15:20.470 ] 00:15:20.470 }' 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.470 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.040 [2024-12-12 09:28:54.774377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.040 09:28:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.040 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:15:21.040 [2024-12-12 09:28:55.041839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:21.300 /dev/nbd0 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.300 1+0 records in 00:15:21.300 1+0 records out 00:15:21.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382277 s, 10.7 MB/s 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:21.300 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:21.559 496+0 records in 00:15:21.559 496+0 records out 00:15:21.559 65011712 bytes (65 MB, 62 MiB) copied, 0.353908 s, 184 MB/s 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.559 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
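Annotation: the `write_unit_size=256`, `dd bs=131072 count=496`, and `65011712 bytes` values in the trace above all follow from the raid5f geometry the RPC dumps report (strip_size_kb 64, 3 base bdevs, raid_bdev_size 126976 blocks). A minimal sketch of that arithmetic, assuming 512-byte logical blocks (variable names mirror the RPC fields, they are not taken from the test script):

```shell
#!/usr/bin/env bash
# Derive the full-stripe write sizing used by the dd above.
# Assumption: 512-byte logical blocks (consistent with the sizes in the log).
strip_size_kb=64        # "strip_size_kb": 64 from bdev_raid_get_bdevs
num_base_bdevs=3        # "num_base_bdevs": 3
raid_bdev_size=126976   # raid_bdev_size in blocks, from bdev_get_bdevs

# One full raid5f stripe = strip size times the number of data strips
# (num_base_bdevs - 1, since one strip per stripe holds parity).
write_unit_blocks=$(( strip_size_kb * 1024 / 512 * (num_base_bdevs - 1) ))
bs=$(( write_unit_blocks * 512 ))                 # dd block size in bytes
count=$(( raid_bdev_size / write_unit_blocks ))   # full stripes that fit

echo "$write_unit_blocks $bs $count $(( bs * count ))"
# -> 256 131072 496 65011712  (131072 B = 128 KiB, matching the "echo 128")
```

Writing in whole 128 KiB stripes lets raid5f compute parity without a read-modify-write, which is why the test sizes the dd this way.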
00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.819 [2024-12-12 09:28:55.678171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.819 [2024-12-12 09:28:55.685926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.819 09:28:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.819 "name": "raid_bdev1", 00:15:21.819 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:21.819 "strip_size_kb": 64, 00:15:21.819 "state": "online", 00:15:21.819 "raid_level": "raid5f", 00:15:21.819 "superblock": true, 00:15:21.819 "num_base_bdevs": 3, 00:15:21.819 "num_base_bdevs_discovered": 2, 00:15:21.819 "num_base_bdevs_operational": 2, 00:15:21.819 "base_bdevs_list": [ 00:15:21.819 { 00:15:21.819 "name": null, 00:15:21.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.819 "is_configured": false, 00:15:21.819 "data_offset": 0, 00:15:21.819 "data_size": 63488 00:15:21.819 }, 00:15:21.819 { 00:15:21.819 "name": "BaseBdev2", 00:15:21.819 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:21.819 "is_configured": true, 00:15:21.819 "data_offset": 2048, 00:15:21.819 "data_size": 63488 00:15:21.819 }, 00:15:21.819 { 00:15:21.819 "name": "BaseBdev3", 00:15:21.819 "uuid": 
"93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:21.819 "is_configured": true, 00:15:21.819 "data_offset": 2048, 00:15:21.819 "data_size": 63488 00:15:21.819 } 00:15:21.819 ] 00:15:21.819 }' 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.819 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.389 09:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.389 09:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.389 09:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.389 [2024-12-12 09:28:56.109162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.389 [2024-12-12 09:28:56.126084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:22.389 09:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.389 09:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:22.389 [2024-12-12 09:28:56.133732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.329 "name": "raid_bdev1", 00:15:23.329 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:23.329 "strip_size_kb": 64, 00:15:23.329 "state": "online", 00:15:23.329 "raid_level": "raid5f", 00:15:23.329 "superblock": true, 00:15:23.329 "num_base_bdevs": 3, 00:15:23.329 "num_base_bdevs_discovered": 3, 00:15:23.329 "num_base_bdevs_operational": 3, 00:15:23.329 "process": { 00:15:23.329 "type": "rebuild", 00:15:23.329 "target": "spare", 00:15:23.329 "progress": { 00:15:23.329 "blocks": 20480, 00:15:23.329 "percent": 16 00:15:23.329 } 00:15:23.329 }, 00:15:23.329 "base_bdevs_list": [ 00:15:23.329 { 00:15:23.329 "name": "spare", 00:15:23.329 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:23.329 "is_configured": true, 00:15:23.329 "data_offset": 2048, 00:15:23.329 "data_size": 63488 00:15:23.329 }, 00:15:23.329 { 00:15:23.329 "name": "BaseBdev2", 00:15:23.329 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:23.329 "is_configured": true, 00:15:23.329 "data_offset": 2048, 00:15:23.329 "data_size": 63488 00:15:23.329 }, 00:15:23.329 { 00:15:23.329 "name": "BaseBdev3", 00:15:23.329 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:23.329 "is_configured": true, 00:15:23.329 "data_offset": 2048, 00:15:23.329 "data_size": 63488 00:15:23.329 } 00:15:23.329 ] 00:15:23.329 }' 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.329 09:28:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.329 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.329 [2024-12-12 09:28:57.277023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.329 [2024-12-12 09:28:57.342757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.329 [2024-12-12 09:28:57.342864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.329 [2024-12-12 09:28:57.342903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.329 [2024-12-12 09:28:57.342924] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.589 09:28:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.589 "name": "raid_bdev1", 00:15:23.589 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:23.589 "strip_size_kb": 64, 00:15:23.589 "state": "online", 00:15:23.589 "raid_level": "raid5f", 00:15:23.589 "superblock": true, 00:15:23.589 "num_base_bdevs": 3, 00:15:23.589 "num_base_bdevs_discovered": 2, 00:15:23.589 "num_base_bdevs_operational": 2, 00:15:23.589 "base_bdevs_list": [ 00:15:23.589 { 00:15:23.589 "name": null, 00:15:23.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.589 "is_configured": false, 00:15:23.589 "data_offset": 0, 00:15:23.589 "data_size": 63488 00:15:23.589 }, 00:15:23.589 { 00:15:23.589 "name": "BaseBdev2", 00:15:23.589 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:23.589 "is_configured": true, 00:15:23.589 "data_offset": 2048, 00:15:23.589 "data_size": 
63488 00:15:23.589 }, 00:15:23.589 { 00:15:23.589 "name": "BaseBdev3", 00:15:23.589 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:23.589 "is_configured": true, 00:15:23.589 "data_offset": 2048, 00:15:23.589 "data_size": 63488 00:15:23.589 } 00:15:23.589 ] 00:15:23.589 }' 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.589 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.848 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.109 "name": "raid_bdev1", 00:15:24.109 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:24.109 "strip_size_kb": 64, 00:15:24.109 "state": "online", 00:15:24.109 "raid_level": "raid5f", 00:15:24.109 "superblock": true, 00:15:24.109 "num_base_bdevs": 3, 00:15:24.109 
"num_base_bdevs_discovered": 2, 00:15:24.109 "num_base_bdevs_operational": 2, 00:15:24.109 "base_bdevs_list": [ 00:15:24.109 { 00:15:24.109 "name": null, 00:15:24.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.109 "is_configured": false, 00:15:24.109 "data_offset": 0, 00:15:24.109 "data_size": 63488 00:15:24.109 }, 00:15:24.109 { 00:15:24.109 "name": "BaseBdev2", 00:15:24.109 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:24.109 "is_configured": true, 00:15:24.109 "data_offset": 2048, 00:15:24.109 "data_size": 63488 00:15:24.109 }, 00:15:24.109 { 00:15:24.109 "name": "BaseBdev3", 00:15:24.109 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:24.109 "is_configured": true, 00:15:24.109 "data_offset": 2048, 00:15:24.109 "data_size": 63488 00:15:24.109 } 00:15:24.109 ] 00:15:24.109 }' 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.109 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.109 [2024-12-12 09:28:57.994213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.109 [2024-12-12 09:28:58.008608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:24.109 09:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.109 09:28:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:24.109 [2024-12-12 09:28:58.015878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.048 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.048 "name": "raid_bdev1", 00:15:25.048 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:25.048 "strip_size_kb": 64, 00:15:25.048 "state": "online", 00:15:25.048 "raid_level": "raid5f", 00:15:25.048 "superblock": true, 00:15:25.048 "num_base_bdevs": 3, 00:15:25.048 "num_base_bdevs_discovered": 3, 00:15:25.048 "num_base_bdevs_operational": 3, 00:15:25.048 "process": { 00:15:25.048 "type": "rebuild", 00:15:25.048 "target": "spare", 00:15:25.049 "progress": { 00:15:25.049 "blocks": 20480, 00:15:25.049 "percent": 16 00:15:25.049 } 
00:15:25.049 }, 00:15:25.049 "base_bdevs_list": [ 00:15:25.049 { 00:15:25.049 "name": "spare", 00:15:25.049 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:25.049 "is_configured": true, 00:15:25.049 "data_offset": 2048, 00:15:25.049 "data_size": 63488 00:15:25.049 }, 00:15:25.049 { 00:15:25.049 "name": "BaseBdev2", 00:15:25.049 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:25.049 "is_configured": true, 00:15:25.049 "data_offset": 2048, 00:15:25.049 "data_size": 63488 00:15:25.049 }, 00:15:25.049 { 00:15:25.049 "name": "BaseBdev3", 00:15:25.049 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:25.049 "is_configured": true, 00:15:25.049 "data_offset": 2048, 00:15:25.049 "data_size": 63488 00:15:25.049 } 00:15:25.049 ] 00:15:25.049 }' 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:25.308 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=565 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.308 09:28:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.308 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.309 "name": "raid_bdev1", 00:15:25.309 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:25.309 "strip_size_kb": 64, 00:15:25.309 "state": "online", 00:15:25.309 "raid_level": "raid5f", 00:15:25.309 "superblock": true, 00:15:25.309 "num_base_bdevs": 3, 00:15:25.309 "num_base_bdevs_discovered": 3, 00:15:25.309 "num_base_bdevs_operational": 3, 00:15:25.309 "process": { 00:15:25.309 "type": "rebuild", 00:15:25.309 "target": "spare", 00:15:25.309 "progress": { 00:15:25.309 "blocks": 22528, 00:15:25.309 "percent": 17 00:15:25.309 } 00:15:25.309 }, 00:15:25.309 "base_bdevs_list": [ 00:15:25.309 { 00:15:25.309 "name": "spare", 00:15:25.309 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:25.309 "is_configured": true, 00:15:25.309 "data_offset": 2048, 00:15:25.309 
"data_size": 63488 00:15:25.309 }, 00:15:25.309 { 00:15:25.309 "name": "BaseBdev2", 00:15:25.309 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:25.309 "is_configured": true, 00:15:25.309 "data_offset": 2048, 00:15:25.309 "data_size": 63488 00:15:25.309 }, 00:15:25.309 { 00:15:25.309 "name": "BaseBdev3", 00:15:25.309 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:25.309 "is_configured": true, 00:15:25.309 "data_offset": 2048, 00:15:25.309 "data_size": 63488 00:15:25.309 } 00:15:25.309 ] 00:15:25.309 }' 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.309 09:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.691 "name": "raid_bdev1", 00:15:26.691 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:26.691 "strip_size_kb": 64, 00:15:26.691 "state": "online", 00:15:26.691 "raid_level": "raid5f", 00:15:26.691 "superblock": true, 00:15:26.691 "num_base_bdevs": 3, 00:15:26.691 "num_base_bdevs_discovered": 3, 00:15:26.691 "num_base_bdevs_operational": 3, 00:15:26.691 "process": { 00:15:26.691 "type": "rebuild", 00:15:26.691 "target": "spare", 00:15:26.691 "progress": { 00:15:26.691 "blocks": 47104, 00:15:26.691 "percent": 37 00:15:26.691 } 00:15:26.691 }, 00:15:26.691 "base_bdevs_list": [ 00:15:26.691 { 00:15:26.691 "name": "spare", 00:15:26.691 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:26.691 "is_configured": true, 00:15:26.691 "data_offset": 2048, 00:15:26.691 "data_size": 63488 00:15:26.691 }, 00:15:26.691 { 00:15:26.691 "name": "BaseBdev2", 00:15:26.691 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:26.691 "is_configured": true, 00:15:26.691 "data_offset": 2048, 00:15:26.691 "data_size": 63488 00:15:26.691 }, 00:15:26.691 { 00:15:26.691 "name": "BaseBdev3", 00:15:26.691 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:26.691 "is_configured": true, 00:15:26.691 "data_offset": 2048, 00:15:26.691 "data_size": 63488 00:15:26.691 } 00:15:26.691 ] 00:15:26.691 }' 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.691 09:29:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.691 09:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.631 "name": "raid_bdev1", 00:15:27.631 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:27.631 "strip_size_kb": 64, 00:15:27.631 "state": "online", 00:15:27.631 "raid_level": "raid5f", 00:15:27.631 "superblock": true, 00:15:27.631 "num_base_bdevs": 3, 00:15:27.631 "num_base_bdevs_discovered": 3, 00:15:27.631 "num_base_bdevs_operational": 
3, 00:15:27.631 "process": { 00:15:27.631 "type": "rebuild", 00:15:27.631 "target": "spare", 00:15:27.631 "progress": { 00:15:27.631 "blocks": 69632, 00:15:27.631 "percent": 54 00:15:27.631 } 00:15:27.631 }, 00:15:27.631 "base_bdevs_list": [ 00:15:27.631 { 00:15:27.631 "name": "spare", 00:15:27.631 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:27.631 "is_configured": true, 00:15:27.631 "data_offset": 2048, 00:15:27.631 "data_size": 63488 00:15:27.631 }, 00:15:27.631 { 00:15:27.631 "name": "BaseBdev2", 00:15:27.631 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:27.631 "is_configured": true, 00:15:27.631 "data_offset": 2048, 00:15:27.631 "data_size": 63488 00:15:27.631 }, 00:15:27.631 { 00:15:27.631 "name": "BaseBdev3", 00:15:27.631 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:27.631 "is_configured": true, 00:15:27.631 "data_offset": 2048, 00:15:27.631 "data_size": 63488 00:15:27.631 } 00:15:27.631 ] 00:15:27.631 }' 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.631 09:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.013 
09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.013 "name": "raid_bdev1", 00:15:29.013 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:29.013 "strip_size_kb": 64, 00:15:29.013 "state": "online", 00:15:29.013 "raid_level": "raid5f", 00:15:29.013 "superblock": true, 00:15:29.013 "num_base_bdevs": 3, 00:15:29.013 "num_base_bdevs_discovered": 3, 00:15:29.013 "num_base_bdevs_operational": 3, 00:15:29.013 "process": { 00:15:29.013 "type": "rebuild", 00:15:29.013 "target": "spare", 00:15:29.013 "progress": { 00:15:29.013 "blocks": 94208, 00:15:29.013 "percent": 74 00:15:29.013 } 00:15:29.013 }, 00:15:29.013 "base_bdevs_list": [ 00:15:29.013 { 00:15:29.013 "name": "spare", 00:15:29.013 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:29.013 "is_configured": true, 00:15:29.013 "data_offset": 2048, 00:15:29.013 "data_size": 63488 00:15:29.013 }, 00:15:29.013 { 00:15:29.013 "name": "BaseBdev2", 00:15:29.013 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:29.013 "is_configured": true, 00:15:29.013 "data_offset": 2048, 00:15:29.013 "data_size": 63488 00:15:29.013 }, 00:15:29.013 { 00:15:29.013 "name": "BaseBdev3", 00:15:29.013 "uuid": 
"93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:29.013 "is_configured": true, 00:15:29.013 "data_offset": 2048, 00:15:29.013 "data_size": 63488 00:15:29.013 } 00:15:29.013 ] 00:15:29.013 }' 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.013 09:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.953 
09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.953 "name": "raid_bdev1", 00:15:29.953 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:29.953 "strip_size_kb": 64, 00:15:29.953 "state": "online", 00:15:29.953 "raid_level": "raid5f", 00:15:29.953 "superblock": true, 00:15:29.953 "num_base_bdevs": 3, 00:15:29.953 "num_base_bdevs_discovered": 3, 00:15:29.953 "num_base_bdevs_operational": 3, 00:15:29.953 "process": { 00:15:29.953 "type": "rebuild", 00:15:29.953 "target": "spare", 00:15:29.953 "progress": { 00:15:29.953 "blocks": 116736, 00:15:29.953 "percent": 91 00:15:29.953 } 00:15:29.953 }, 00:15:29.953 "base_bdevs_list": [ 00:15:29.953 { 00:15:29.953 "name": "spare", 00:15:29.953 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:29.953 "is_configured": true, 00:15:29.953 "data_offset": 2048, 00:15:29.953 "data_size": 63488 00:15:29.953 }, 00:15:29.953 { 00:15:29.953 "name": "BaseBdev2", 00:15:29.953 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:29.953 "is_configured": true, 00:15:29.953 "data_offset": 2048, 00:15:29.953 "data_size": 63488 00:15:29.953 }, 00:15:29.953 { 00:15:29.953 "name": "BaseBdev3", 00:15:29.953 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:29.953 "is_configured": true, 00:15:29.953 "data_offset": 2048, 00:15:29.953 "data_size": 63488 00:15:29.953 } 00:15:29.953 ] 00:15:29.953 }' 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.953 09:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.523 [2024-12-12 09:29:04.257170] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:30.523 [2024-12-12 09:29:04.257303] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:30.523 [2024-12-12 09:29:04.257446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.092 "name": "raid_bdev1", 00:15:31.092 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:31.092 "strip_size_kb": 64, 00:15:31.092 "state": "online", 00:15:31.092 "raid_level": "raid5f", 00:15:31.092 "superblock": true, 00:15:31.092 "num_base_bdevs": 3, 00:15:31.092 "num_base_bdevs_discovered": 3, 
00:15:31.092 "num_base_bdevs_operational": 3, 00:15:31.092 "base_bdevs_list": [ 00:15:31.092 { 00:15:31.092 "name": "spare", 00:15:31.092 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:31.092 "is_configured": true, 00:15:31.092 "data_offset": 2048, 00:15:31.092 "data_size": 63488 00:15:31.092 }, 00:15:31.092 { 00:15:31.092 "name": "BaseBdev2", 00:15:31.092 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:31.092 "is_configured": true, 00:15:31.092 "data_offset": 2048, 00:15:31.092 "data_size": 63488 00:15:31.092 }, 00:15:31.092 { 00:15:31.092 "name": "BaseBdev3", 00:15:31.092 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:31.092 "is_configured": true, 00:15:31.092 "data_offset": 2048, 00:15:31.092 "data_size": 63488 00:15:31.092 } 00:15:31.092 ] 00:15:31.092 }' 00:15:31.092 09:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.092 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.351 "name": "raid_bdev1", 00:15:31.351 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:31.351 "strip_size_kb": 64, 00:15:31.351 "state": "online", 00:15:31.351 "raid_level": "raid5f", 00:15:31.351 "superblock": true, 00:15:31.351 "num_base_bdevs": 3, 00:15:31.351 "num_base_bdevs_discovered": 3, 00:15:31.351 "num_base_bdevs_operational": 3, 00:15:31.351 "base_bdevs_list": [ 00:15:31.351 { 00:15:31.351 "name": "spare", 00:15:31.351 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:31.351 "is_configured": true, 00:15:31.351 "data_offset": 2048, 00:15:31.351 "data_size": 63488 00:15:31.351 }, 00:15:31.351 { 00:15:31.351 "name": "BaseBdev2", 00:15:31.351 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:31.351 "is_configured": true, 00:15:31.351 "data_offset": 2048, 00:15:31.351 "data_size": 63488 00:15:31.351 }, 00:15:31.351 { 00:15:31.351 "name": "BaseBdev3", 00:15:31.351 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:31.351 "is_configured": true, 00:15:31.351 "data_offset": 2048, 00:15:31.351 "data_size": 63488 00:15:31.351 } 00:15:31.351 ] 00:15:31.351 }' 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.351 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.352 "name": "raid_bdev1", 00:15:31.352 "uuid": 
"90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:31.352 "strip_size_kb": 64, 00:15:31.352 "state": "online", 00:15:31.352 "raid_level": "raid5f", 00:15:31.352 "superblock": true, 00:15:31.352 "num_base_bdevs": 3, 00:15:31.352 "num_base_bdevs_discovered": 3, 00:15:31.352 "num_base_bdevs_operational": 3, 00:15:31.352 "base_bdevs_list": [ 00:15:31.352 { 00:15:31.352 "name": "spare", 00:15:31.352 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:31.352 "is_configured": true, 00:15:31.352 "data_offset": 2048, 00:15:31.352 "data_size": 63488 00:15:31.352 }, 00:15:31.352 { 00:15:31.352 "name": "BaseBdev2", 00:15:31.352 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:31.352 "is_configured": true, 00:15:31.352 "data_offset": 2048, 00:15:31.352 "data_size": 63488 00:15:31.352 }, 00:15:31.352 { 00:15:31.352 "name": "BaseBdev3", 00:15:31.352 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:31.352 "is_configured": true, 00:15:31.352 "data_offset": 2048, 00:15:31.352 "data_size": 63488 00:15:31.352 } 00:15:31.352 ] 00:15:31.352 }' 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.352 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.920 [2024-12-12 09:29:05.664000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.920 [2024-12-12 09:29:05.664075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.920 [2024-12-12 09:29:05.664191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.920 [2024-12-12 09:29:05.664283] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.920 [2024-12-12 09:29:05.664338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:31.920 /dev/nbd0 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.920 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.180 1+0 records in 00:15:32.180 1+0 records out 00:15:32.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373025 s, 11.0 MB/s 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.180 09:29:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.180 09:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:32.180 /dev/nbd1 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.180 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.439 1+0 records in 00:15:32.439 1+0 records out 00:15:32.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217337 s, 18.8 MB/s 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.439 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.699 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:32.958 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:32.958 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:32.958 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:32.958 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.959 [2024-12-12 09:29:06.866592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.959 [2024-12-12 09:29:06.866652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.959 [2024-12-12 09:29:06.866673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:32.959 [2024-12-12 09:29:06.866683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.959 [2024-12-12 09:29:06.869062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.959 [2024-12-12 09:29:06.869102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.959 [2024-12-12 09:29:06.869178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:32.959 [2024-12-12 09:29:06.869239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.959 [2024-12-12 09:29:06.869372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.959 [2024-12-12 09:29:06.869473] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.959 spare 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.959 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.959 [2024-12-12 09:29:06.969356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:32.959 [2024-12-12 09:29:06.969384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:32.959 [2024-12-12 09:29:06.969643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:32.959 [2024-12-12 09:29:06.974660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:32.959 [2024-12-12 09:29:06.974681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:32.959 [2024-12-12 09:29:06.974853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.219 09:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.219 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.219 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.219 "name": "raid_bdev1", 00:15:33.219 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:33.219 "strip_size_kb": 64, 00:15:33.219 "state": "online", 00:15:33.219 "raid_level": "raid5f", 00:15:33.219 "superblock": true, 00:15:33.219 "num_base_bdevs": 3, 00:15:33.219 "num_base_bdevs_discovered": 3, 00:15:33.219 "num_base_bdevs_operational": 3, 00:15:33.219 "base_bdevs_list": [ 00:15:33.219 { 00:15:33.219 "name": "spare", 00:15:33.219 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:33.219 "is_configured": true, 00:15:33.219 "data_offset": 2048, 00:15:33.219 "data_size": 63488 00:15:33.219 }, 00:15:33.219 { 00:15:33.219 "name": "BaseBdev2", 00:15:33.219 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:33.219 "is_configured": true, 00:15:33.219 "data_offset": 
2048, 00:15:33.219 "data_size": 63488 00:15:33.219 }, 00:15:33.219 { 00:15:33.219 "name": "BaseBdev3", 00:15:33.219 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:33.219 "is_configured": true, 00:15:33.219 "data_offset": 2048, 00:15:33.219 "data_size": 63488 00:15:33.219 } 00:15:33.219 ] 00:15:33.219 }' 00:15:33.219 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.219 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.479 "name": "raid_bdev1", 00:15:33.479 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:33.479 "strip_size_kb": 64, 00:15:33.479 "state": "online", 00:15:33.479 "raid_level": "raid5f", 00:15:33.479 "superblock": true, 00:15:33.479 
"num_base_bdevs": 3, 00:15:33.479 "num_base_bdevs_discovered": 3, 00:15:33.479 "num_base_bdevs_operational": 3, 00:15:33.479 "base_bdevs_list": [ 00:15:33.479 { 00:15:33.479 "name": "spare", 00:15:33.479 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:33.479 "is_configured": true, 00:15:33.479 "data_offset": 2048, 00:15:33.479 "data_size": 63488 00:15:33.479 }, 00:15:33.479 { 00:15:33.479 "name": "BaseBdev2", 00:15:33.479 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:33.479 "is_configured": true, 00:15:33.479 "data_offset": 2048, 00:15:33.479 "data_size": 63488 00:15:33.479 }, 00:15:33.479 { 00:15:33.479 "name": "BaseBdev3", 00:15:33.479 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:33.479 "is_configured": true, 00:15:33.479 "data_offset": 2048, 00:15:33.479 "data_size": 63488 00:15:33.479 } 00:15:33.479 ] 00:15:33.479 }' 00:15:33.479 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.738 09:29:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 [2024-12-12 09:29:07.624565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.738 "name": "raid_bdev1", 00:15:33.738 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:33.738 "strip_size_kb": 64, 00:15:33.738 "state": "online", 00:15:33.738 "raid_level": "raid5f", 00:15:33.738 "superblock": true, 00:15:33.738 "num_base_bdevs": 3, 00:15:33.738 "num_base_bdevs_discovered": 2, 00:15:33.738 "num_base_bdevs_operational": 2, 00:15:33.738 "base_bdevs_list": [ 00:15:33.738 { 00:15:33.738 "name": null, 00:15:33.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.738 "is_configured": false, 00:15:33.738 "data_offset": 0, 00:15:33.738 "data_size": 63488 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "name": "BaseBdev2", 00:15:33.738 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:33.738 "is_configured": true, 00:15:33.738 "data_offset": 2048, 00:15:33.738 "data_size": 63488 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "name": "BaseBdev3", 00:15:33.738 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:33.738 "is_configured": true, 00:15:33.738 "data_offset": 2048, 00:15:33.738 "data_size": 63488 00:15:33.738 } 00:15:33.738 ] 00:15:33.738 }' 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.738 09:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.310 09:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.310 09:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.310 09:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.310 [2024-12-12 09:29:08.079808] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.310 [2024-12-12 09:29:08.080012] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.310 [2024-12-12 09:29:08.080082] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:34.310 [2024-12-12 09:29:08.080139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.310 [2024-12-12 09:29:08.095746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:34.310 09:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.310 09:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:34.310 [2024-12-12 09:29:08.102889] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.250 "name": "raid_bdev1", 00:15:35.250 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:35.250 "strip_size_kb": 64, 00:15:35.250 "state": "online", 00:15:35.250 "raid_level": "raid5f", 00:15:35.250 "superblock": true, 00:15:35.250 "num_base_bdevs": 3, 00:15:35.250 "num_base_bdevs_discovered": 3, 00:15:35.250 "num_base_bdevs_operational": 3, 00:15:35.250 "process": { 00:15:35.250 "type": "rebuild", 00:15:35.250 "target": "spare", 00:15:35.250 "progress": { 00:15:35.250 "blocks": 20480, 00:15:35.250 "percent": 16 00:15:35.250 } 00:15:35.250 }, 00:15:35.250 "base_bdevs_list": [ 00:15:35.250 { 00:15:35.250 "name": "spare", 00:15:35.250 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:35.250 "is_configured": true, 00:15:35.250 "data_offset": 2048, 00:15:35.250 "data_size": 63488 00:15:35.250 }, 00:15:35.250 { 00:15:35.250 "name": "BaseBdev2", 00:15:35.250 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:35.250 "is_configured": true, 00:15:35.250 "data_offset": 2048, 00:15:35.250 "data_size": 63488 00:15:35.250 }, 00:15:35.250 { 00:15:35.250 "name": "BaseBdev3", 00:15:35.250 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:35.250 "is_configured": true, 00:15:35.250 "data_offset": 2048, 00:15:35.250 "data_size": 63488 00:15:35.250 } 00:15:35.250 ] 00:15:35.250 }' 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.250 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.250 [2024-12-12 09:29:09.253869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.510 [2024-12-12 09:29:09.311626] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.510 [2024-12-12 09:29:09.311690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.510 [2024-12-12 09:29:09.311706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.510 [2024-12-12 09:29:09.311716] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.510 09:29:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.510 "name": "raid_bdev1", 00:15:35.510 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:35.510 "strip_size_kb": 64, 00:15:35.510 "state": "online", 00:15:35.510 "raid_level": "raid5f", 00:15:35.510 "superblock": true, 00:15:35.510 "num_base_bdevs": 3, 00:15:35.510 "num_base_bdevs_discovered": 2, 00:15:35.510 "num_base_bdevs_operational": 2, 00:15:35.510 "base_bdevs_list": [ 00:15:35.510 { 00:15:35.510 "name": null, 00:15:35.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.510 "is_configured": false, 00:15:35.510 "data_offset": 0, 00:15:35.510 "data_size": 63488 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "name": "BaseBdev2", 00:15:35.510 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:35.510 "is_configured": true, 00:15:35.510 "data_offset": 2048, 00:15:35.510 "data_size": 63488 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "name": "BaseBdev3", 00:15:35.510 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:35.510 "is_configured": true, 00:15:35.510 "data_offset": 2048, 00:15:35.510 "data_size": 63488 00:15:35.510 } 00:15:35.510 ] 00:15:35.510 }' 00:15:35.510 09:29:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.510 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.080 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.080 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.080 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.080 [2024-12-12 09:29:09.807771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.080 [2024-12-12 09:29:09.807890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.080 [2024-12-12 09:29:09.807927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:36.080 [2024-12-12 09:29:09.807972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.080 [2024-12-12 09:29:09.808493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.080 [2024-12-12 09:29:09.808558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.080 [2024-12-12 09:29:09.808671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.080 [2024-12-12 09:29:09.808719] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:36.080 [2024-12-12 09:29:09.808773] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:36.080 [2024-12-12 09:29:09.808834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.080 [2024-12-12 09:29:09.822646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:36.080 spare 00:15:36.080 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.080 09:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:36.080 [2024-12-12 09:29:09.829690] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.020 "name": "raid_bdev1", 00:15:37.020 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:37.020 "strip_size_kb": 64, 00:15:37.020 "state": 
"online", 00:15:37.020 "raid_level": "raid5f", 00:15:37.020 "superblock": true, 00:15:37.020 "num_base_bdevs": 3, 00:15:37.020 "num_base_bdevs_discovered": 3, 00:15:37.020 "num_base_bdevs_operational": 3, 00:15:37.020 "process": { 00:15:37.020 "type": "rebuild", 00:15:37.020 "target": "spare", 00:15:37.020 "progress": { 00:15:37.020 "blocks": 20480, 00:15:37.020 "percent": 16 00:15:37.020 } 00:15:37.020 }, 00:15:37.020 "base_bdevs_list": [ 00:15:37.020 { 00:15:37.020 "name": "spare", 00:15:37.020 "uuid": "8902c676-ffa8-5e96-bde3-e45fbfcdc10d", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 }, 00:15:37.020 { 00:15:37.020 "name": "BaseBdev2", 00:15:37.020 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 }, 00:15:37.020 { 00:15:37.020 "name": "BaseBdev3", 00:15:37.020 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 } 00:15:37.020 ] 00:15:37.020 }' 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.020 09:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.020 [2024-12-12 09:29:10.960816] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.020 [2024-12-12 09:29:11.038534] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.020 [2024-12-12 09:29:11.038581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.020 [2024-12-12 09:29:11.038599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.020 [2024-12-12 09:29:11.038606] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.281 "name": "raid_bdev1", 00:15:37.281 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:37.281 "strip_size_kb": 64, 00:15:37.281 "state": "online", 00:15:37.281 "raid_level": "raid5f", 00:15:37.281 "superblock": true, 00:15:37.281 "num_base_bdevs": 3, 00:15:37.281 "num_base_bdevs_discovered": 2, 00:15:37.281 "num_base_bdevs_operational": 2, 00:15:37.281 "base_bdevs_list": [ 00:15:37.281 { 00:15:37.281 "name": null, 00:15:37.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.281 "is_configured": false, 00:15:37.281 "data_offset": 0, 00:15:37.281 "data_size": 63488 00:15:37.281 }, 00:15:37.281 { 00:15:37.281 "name": "BaseBdev2", 00:15:37.281 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:37.281 "is_configured": true, 00:15:37.281 "data_offset": 2048, 00:15:37.281 "data_size": 63488 00:15:37.281 }, 00:15:37.281 { 00:15:37.281 "name": "BaseBdev3", 00:15:37.281 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:37.281 "is_configured": true, 00:15:37.281 "data_offset": 2048, 00:15:37.281 "data_size": 63488 00:15:37.281 } 00:15:37.281 ] 00:15:37.281 }' 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.281 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.541 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.800 "name": "raid_bdev1", 00:15:37.800 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:37.800 "strip_size_kb": 64, 00:15:37.800 "state": "online", 00:15:37.800 "raid_level": "raid5f", 00:15:37.800 "superblock": true, 00:15:37.800 "num_base_bdevs": 3, 00:15:37.800 "num_base_bdevs_discovered": 2, 00:15:37.800 "num_base_bdevs_operational": 2, 00:15:37.800 "base_bdevs_list": [ 00:15:37.800 { 00:15:37.800 "name": null, 00:15:37.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.800 "is_configured": false, 00:15:37.800 "data_offset": 0, 00:15:37.800 "data_size": 63488 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "name": "BaseBdev2", 00:15:37.800 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:37.800 "is_configured": true, 00:15:37.800 "data_offset": 2048, 00:15:37.800 "data_size": 63488 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "name": "BaseBdev3", 00:15:37.800 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:37.800 "is_configured": true, 
00:15:37.800 "data_offset": 2048, 00:15:37.800 "data_size": 63488 00:15:37.800 } 00:15:37.800 ] 00:15:37.800 }' 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.800 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.800 [2024-12-12 09:29:11.697532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:37.800 [2024-12-12 09:29:11.697627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.800 [2024-12-12 09:29:11.697675] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:37.800 [2024-12-12 09:29:11.697702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.800 [2024-12-12 09:29:11.698218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.800 [2024-12-12 
09:29:11.698284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:37.800 [2024-12-12 09:29:11.698390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:37.801 [2024-12-12 09:29:11.698435] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:37.801 [2024-12-12 09:29:11.698489] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:37.801 [2024-12-12 09:29:11.698523] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:37.801 BaseBdev1 00:15:37.801 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.801 09:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.740 09:29:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.740 "name": "raid_bdev1", 00:15:38.740 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:38.740 "strip_size_kb": 64, 00:15:38.740 "state": "online", 00:15:38.740 "raid_level": "raid5f", 00:15:38.740 "superblock": true, 00:15:38.740 "num_base_bdevs": 3, 00:15:38.740 "num_base_bdevs_discovered": 2, 00:15:38.740 "num_base_bdevs_operational": 2, 00:15:38.740 "base_bdevs_list": [ 00:15:38.740 { 00:15:38.740 "name": null, 00:15:38.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.740 "is_configured": false, 00:15:38.740 "data_offset": 0, 00:15:38.740 "data_size": 63488 00:15:38.740 }, 00:15:38.740 { 00:15:38.740 "name": "BaseBdev2", 00:15:38.740 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:38.740 "is_configured": true, 00:15:38.740 "data_offset": 2048, 00:15:38.740 "data_size": 63488 00:15:38.740 }, 00:15:38.740 { 00:15:38.740 "name": "BaseBdev3", 00:15:38.740 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:38.740 "is_configured": true, 00:15:38.740 "data_offset": 2048, 00:15:38.740 "data_size": 63488 00:15:38.740 } 00:15:38.740 ] 00:15:38.740 }' 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.740 09:29:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.311 "name": "raid_bdev1", 00:15:39.311 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:39.311 "strip_size_kb": 64, 00:15:39.311 "state": "online", 00:15:39.311 "raid_level": "raid5f", 00:15:39.311 "superblock": true, 00:15:39.311 "num_base_bdevs": 3, 00:15:39.311 "num_base_bdevs_discovered": 2, 00:15:39.311 "num_base_bdevs_operational": 2, 00:15:39.311 "base_bdevs_list": [ 00:15:39.311 { 00:15:39.311 "name": null, 00:15:39.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.311 "is_configured": false, 00:15:39.311 "data_offset": 0, 00:15:39.311 "data_size": 63488 00:15:39.311 }, 00:15:39.311 { 00:15:39.311 "name": "BaseBdev2", 00:15:39.311 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 
00:15:39.311 "is_configured": true, 00:15:39.311 "data_offset": 2048, 00:15:39.311 "data_size": 63488 00:15:39.311 }, 00:15:39.311 { 00:15:39.311 "name": "BaseBdev3", 00:15:39.311 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:39.311 "is_configured": true, 00:15:39.311 "data_offset": 2048, 00:15:39.311 "data_size": 63488 00:15:39.311 } 00:15:39.311 ] 00:15:39.311 }' 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.311 09:29:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.311 [2024-12-12 09:29:13.274950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.311 [2024-12-12 09:29:13.275115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.311 [2024-12-12 09:29:13.275187] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.311 request: 00:15:39.311 { 00:15:39.311 "base_bdev": "BaseBdev1", 00:15:39.311 "raid_bdev": "raid_bdev1", 00:15:39.311 "method": "bdev_raid_add_base_bdev", 00:15:39.311 "req_id": 1 00:15:39.311 } 00:15:39.311 Got JSON-RPC error response 00:15:39.311 response: 00:15:39.311 { 00:15:39.311 "code": -22, 00:15:39.311 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:39.311 } 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.311 09:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.693 "name": "raid_bdev1", 00:15:40.693 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:40.693 "strip_size_kb": 64, 00:15:40.693 "state": "online", 00:15:40.693 "raid_level": "raid5f", 00:15:40.693 "superblock": true, 00:15:40.693 "num_base_bdevs": 3, 00:15:40.693 "num_base_bdevs_discovered": 2, 00:15:40.693 "num_base_bdevs_operational": 2, 00:15:40.693 "base_bdevs_list": [ 00:15:40.693 { 00:15:40.693 "name": null, 00:15:40.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.693 "is_configured": false, 00:15:40.693 "data_offset": 0, 00:15:40.693 "data_size": 63488 00:15:40.693 }, 00:15:40.693 { 00:15:40.693 
"name": "BaseBdev2", 00:15:40.693 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:40.693 "is_configured": true, 00:15:40.693 "data_offset": 2048, 00:15:40.693 "data_size": 63488 00:15:40.693 }, 00:15:40.693 { 00:15:40.693 "name": "BaseBdev3", 00:15:40.693 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:40.693 "is_configured": true, 00:15:40.693 "data_offset": 2048, 00:15:40.693 "data_size": 63488 00:15:40.693 } 00:15:40.693 ] 00:15:40.693 }' 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.693 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.954 "name": "raid_bdev1", 00:15:40.954 "uuid": "90bdf62c-73fe-41b4-b814-08e069f51703", 00:15:40.954 
"strip_size_kb": 64, 00:15:40.954 "state": "online", 00:15:40.954 "raid_level": "raid5f", 00:15:40.954 "superblock": true, 00:15:40.954 "num_base_bdevs": 3, 00:15:40.954 "num_base_bdevs_discovered": 2, 00:15:40.954 "num_base_bdevs_operational": 2, 00:15:40.954 "base_bdevs_list": [ 00:15:40.954 { 00:15:40.954 "name": null, 00:15:40.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.954 "is_configured": false, 00:15:40.954 "data_offset": 0, 00:15:40.954 "data_size": 63488 00:15:40.954 }, 00:15:40.954 { 00:15:40.954 "name": "BaseBdev2", 00:15:40.954 "uuid": "60263f06-4c2b-511d-9d17-23b07d7b2b0c", 00:15:40.954 "is_configured": true, 00:15:40.954 "data_offset": 2048, 00:15:40.954 "data_size": 63488 00:15:40.954 }, 00:15:40.954 { 00:15:40.954 "name": "BaseBdev3", 00:15:40.954 "uuid": "93e751b1-7e38-5e89-84db-82b9f121a4be", 00:15:40.954 "is_configured": true, 00:15:40.954 "data_offset": 2048, 00:15:40.954 "data_size": 63488 00:15:40.954 } 00:15:40.954 ] 00:15:40.954 }' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 83157 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83157 ']' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 83157 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.954 09:29:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83157 00:15:40.954 killing process with pid 83157 00:15:40.954 Received shutdown signal, test time was about 60.000000 seconds 00:15:40.954 00:15:40.954 Latency(us) 00:15:40.954 [2024-12-12T09:29:14.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.954 [2024-12-12T09:29:14.977Z] =================================================================================================================== 00:15:40.954 [2024-12-12T09:29:14.977Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83157' 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 83157 00:15:40.954 [2024-12-12 09:29:14.923165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.954 [2024-12-12 09:29:14.923250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.954 [2024-12-12 09:29:14.923299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.954 [2024-12-12 09:29:14.923324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:40.954 09:29:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 83157 00:15:41.525 [2024-12-12 09:29:15.323131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.905 09:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:42.905 ************************************ 00:15:42.905 END TEST 
raid5f_rebuild_test_sb 00:15:42.905 ************************************ 00:15:42.905 00:15:42.905 real 0m23.383s 00:15:42.905 user 0m29.802s 00:15:42.905 sys 0m2.884s 00:15:42.905 09:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.905 09:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.905 09:29:16 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:42.905 09:29:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:42.905 09:29:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:42.905 09:29:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.905 09:29:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.905 ************************************ 00:15:42.905 START TEST raid5f_state_function_test 00:15:42.905 ************************************ 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83904 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83904' 00:15:42.905 Process raid pid: 83904 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83904 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83904 ']' 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.905 09:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.905 [2024-12-12 09:29:16.681825] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:15:42.905 [2024-12-12 09:29:16.681981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.905 [2024-12-12 09:29:16.864081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.166 [2024-12-12 09:29:16.992246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.425 [2024-12-12 09:29:17.209139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.425 [2024-12-12 09:29:17.209180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.685 [2024-12-12 09:29:17.505581] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.685 [2024-12-12 09:29:17.505641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.685 [2024-12-12 09:29:17.505650] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.685 [2024-12-12 09:29:17.505660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.685 [2024-12-12 09:29:17.505666] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:43.685 [2024-12-12 09:29:17.505675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.685 [2024-12-12 09:29:17.505680] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:43.685 [2024-12-12 09:29:17.505690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.685 09:29:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.685 "name": "Existed_Raid", 00:15:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.685 "strip_size_kb": 64, 00:15:43.685 "state": "configuring", 00:15:43.685 "raid_level": "raid5f", 00:15:43.685 "superblock": false, 00:15:43.685 "num_base_bdevs": 4, 00:15:43.685 "num_base_bdevs_discovered": 0, 00:15:43.685 "num_base_bdevs_operational": 4, 00:15:43.685 "base_bdevs_list": [ 00:15:43.685 { 00:15:43.685 "name": "BaseBdev1", 00:15:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.685 "is_configured": false, 00:15:43.685 "data_offset": 0, 00:15:43.685 "data_size": 0 00:15:43.685 }, 00:15:43.685 { 00:15:43.685 "name": "BaseBdev2", 00:15:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.685 "is_configured": false, 00:15:43.685 "data_offset": 0, 00:15:43.685 "data_size": 0 00:15:43.685 }, 00:15:43.685 { 00:15:43.685 "name": "BaseBdev3", 00:15:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.685 "is_configured": false, 00:15:43.685 "data_offset": 0, 00:15:43.685 "data_size": 0 00:15:43.685 }, 00:15:43.685 { 00:15:43.685 "name": "BaseBdev4", 00:15:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.685 "is_configured": false, 00:15:43.685 "data_offset": 0, 00:15:43.685 "data_size": 0 00:15:43.685 } 00:15:43.685 ] 00:15:43.685 }' 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.685 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.254 09:29:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.254 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 [2024-12-12 09:29:17.980694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.255 [2024-12-12 09:29:17.980798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 [2024-12-12 09:29:17.992683] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.255 [2024-12-12 09:29:17.992763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.255 [2024-12-12 09:29:17.992788] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.255 [2024-12-12 09:29:17.992809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.255 [2024-12-12 09:29:17.992825] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.255 [2024-12-12 09:29:17.992846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.255 [2024-12-12 09:29:17.992862] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:44.255 [2024-12-12 09:29:17.992898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.255 09:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 [2024-12-12 09:29:18.045904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.255 BaseBdev1 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.255 
09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 [ 00:15:44.255 { 00:15:44.255 "name": "BaseBdev1", 00:15:44.255 "aliases": [ 00:15:44.255 "f29a2178-b985-4f18-99e6-f48b7ac04685" 00:15:44.255 ], 00:15:44.255 "product_name": "Malloc disk", 00:15:44.255 "block_size": 512, 00:15:44.255 "num_blocks": 65536, 00:15:44.255 "uuid": "f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:44.255 "assigned_rate_limits": { 00:15:44.255 "rw_ios_per_sec": 0, 00:15:44.255 "rw_mbytes_per_sec": 0, 00:15:44.255 "r_mbytes_per_sec": 0, 00:15:44.255 "w_mbytes_per_sec": 0 00:15:44.255 }, 00:15:44.255 "claimed": true, 00:15:44.255 "claim_type": "exclusive_write", 00:15:44.255 "zoned": false, 00:15:44.255 "supported_io_types": { 00:15:44.255 "read": true, 00:15:44.255 "write": true, 00:15:44.255 "unmap": true, 00:15:44.255 "flush": true, 00:15:44.255 "reset": true, 00:15:44.255 "nvme_admin": false, 00:15:44.255 "nvme_io": false, 00:15:44.255 "nvme_io_md": false, 00:15:44.255 "write_zeroes": true, 00:15:44.255 "zcopy": true, 00:15:44.255 "get_zone_info": false, 00:15:44.255 "zone_management": false, 00:15:44.255 "zone_append": false, 00:15:44.255 "compare": false, 00:15:44.255 "compare_and_write": false, 00:15:44.255 "abort": true, 00:15:44.255 "seek_hole": false, 00:15:44.255 "seek_data": false, 00:15:44.255 "copy": true, 00:15:44.255 "nvme_iov_md": false 00:15:44.255 }, 00:15:44.255 "memory_domains": [ 00:15:44.255 { 00:15:44.255 "dma_device_id": "system", 00:15:44.255 "dma_device_type": 1 00:15:44.255 }, 00:15:44.255 { 00:15:44.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.255 "dma_device_type": 2 00:15:44.255 } 00:15:44.255 ], 00:15:44.255 "driver_specific": {} 00:15:44.255 } 
00:15:44.255 ] 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.255 "name": "Existed_Raid", 00:15:44.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.255 "strip_size_kb": 64, 00:15:44.255 "state": "configuring", 00:15:44.255 "raid_level": "raid5f", 00:15:44.255 "superblock": false, 00:15:44.255 "num_base_bdevs": 4, 00:15:44.255 "num_base_bdevs_discovered": 1, 00:15:44.255 "num_base_bdevs_operational": 4, 00:15:44.255 "base_bdevs_list": [ 00:15:44.255 { 00:15:44.255 "name": "BaseBdev1", 00:15:44.255 "uuid": "f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:44.255 "is_configured": true, 00:15:44.255 "data_offset": 0, 00:15:44.255 "data_size": 65536 00:15:44.255 }, 00:15:44.255 { 00:15:44.255 "name": "BaseBdev2", 00:15:44.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.255 "is_configured": false, 00:15:44.255 "data_offset": 0, 00:15:44.255 "data_size": 0 00:15:44.255 }, 00:15:44.255 { 00:15:44.255 "name": "BaseBdev3", 00:15:44.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.255 "is_configured": false, 00:15:44.255 "data_offset": 0, 00:15:44.255 "data_size": 0 00:15:44.255 }, 00:15:44.255 { 00:15:44.255 "name": "BaseBdev4", 00:15:44.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.255 "is_configured": false, 00:15:44.255 "data_offset": 0, 00:15:44.255 "data_size": 0 00:15:44.255 } 00:15:44.255 ] 00:15:44.255 }' 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.255 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.514 
[2024-12-12 09:29:18.525062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.514 [2024-12-12 09:29:18.525101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.514 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.773 [2024-12-12 09:29:18.537114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.773 [2024-12-12 09:29:18.539216] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.773 [2024-12-12 09:29:18.539289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.773 [2024-12-12 09:29:18.539317] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.773 [2024-12-12 09:29:18.539340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.773 [2024-12-12 09:29:18.539357] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:44.773 [2024-12-12 09:29:18.539377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.773 "name": "Existed_Raid", 00:15:44.773 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:44.773 "strip_size_kb": 64, 00:15:44.773 "state": "configuring", 00:15:44.773 "raid_level": "raid5f", 00:15:44.773 "superblock": false, 00:15:44.773 "num_base_bdevs": 4, 00:15:44.773 "num_base_bdevs_discovered": 1, 00:15:44.773 "num_base_bdevs_operational": 4, 00:15:44.773 "base_bdevs_list": [ 00:15:44.773 { 00:15:44.773 "name": "BaseBdev1", 00:15:44.773 "uuid": "f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:44.773 "is_configured": true, 00:15:44.773 "data_offset": 0, 00:15:44.773 "data_size": 65536 00:15:44.773 }, 00:15:44.773 { 00:15:44.773 "name": "BaseBdev2", 00:15:44.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.773 "is_configured": false, 00:15:44.773 "data_offset": 0, 00:15:44.773 "data_size": 0 00:15:44.773 }, 00:15:44.773 { 00:15:44.773 "name": "BaseBdev3", 00:15:44.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.773 "is_configured": false, 00:15:44.773 "data_offset": 0, 00:15:44.773 "data_size": 0 00:15:44.773 }, 00:15:44.773 { 00:15:44.773 "name": "BaseBdev4", 00:15:44.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.773 "is_configured": false, 00:15:44.773 "data_offset": 0, 00:15:44.773 "data_size": 0 00:15:44.773 } 00:15:44.773 ] 00:15:44.773 }' 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.773 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.032 09:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:45.032 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.032 09:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.032 [2024-12-12 09:29:19.022401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.032 BaseBdev2 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.032 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.033 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.033 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.033 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.033 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.033 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.033 [ 00:15:45.033 { 00:15:45.033 "name": "BaseBdev2", 00:15:45.033 "aliases": [ 00:15:45.033 "ba328faf-90e4-4037-be31-b5759de36763" 00:15:45.033 ], 00:15:45.033 "product_name": "Malloc disk", 00:15:45.033 "block_size": 512, 00:15:45.033 "num_blocks": 65536, 00:15:45.033 "uuid": "ba328faf-90e4-4037-be31-b5759de36763", 00:15:45.033 "assigned_rate_limits": { 00:15:45.033 "rw_ios_per_sec": 0, 00:15:45.033 "rw_mbytes_per_sec": 0, 00:15:45.033 
"r_mbytes_per_sec": 0, 00:15:45.033 "w_mbytes_per_sec": 0 00:15:45.033 }, 00:15:45.033 "claimed": true, 00:15:45.033 "claim_type": "exclusive_write", 00:15:45.033 "zoned": false, 00:15:45.033 "supported_io_types": { 00:15:45.033 "read": true, 00:15:45.033 "write": true, 00:15:45.033 "unmap": true, 00:15:45.033 "flush": true, 00:15:45.033 "reset": true, 00:15:45.033 "nvme_admin": false, 00:15:45.033 "nvme_io": false, 00:15:45.033 "nvme_io_md": false, 00:15:45.033 "write_zeroes": true, 00:15:45.033 "zcopy": true, 00:15:45.033 "get_zone_info": false, 00:15:45.292 "zone_management": false, 00:15:45.292 "zone_append": false, 00:15:45.292 "compare": false, 00:15:45.292 "compare_and_write": false, 00:15:45.292 "abort": true, 00:15:45.292 "seek_hole": false, 00:15:45.292 "seek_data": false, 00:15:45.292 "copy": true, 00:15:45.292 "nvme_iov_md": false 00:15:45.292 }, 00:15:45.292 "memory_domains": [ 00:15:45.292 { 00:15:45.292 "dma_device_id": "system", 00:15:45.292 "dma_device_type": 1 00:15:45.292 }, 00:15:45.292 { 00:15:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.292 "dma_device_type": 2 00:15:45.292 } 00:15:45.292 ], 00:15:45.292 "driver_specific": {} 00:15:45.292 } 00:15:45.292 ] 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.292 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.292 "name": "Existed_Raid", 00:15:45.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.292 "strip_size_kb": 64, 00:15:45.292 "state": "configuring", 00:15:45.292 "raid_level": "raid5f", 00:15:45.292 "superblock": false, 00:15:45.292 "num_base_bdevs": 4, 00:15:45.292 "num_base_bdevs_discovered": 2, 00:15:45.292 "num_base_bdevs_operational": 4, 00:15:45.292 "base_bdevs_list": [ 00:15:45.292 { 00:15:45.292 "name": "BaseBdev1", 00:15:45.292 "uuid": 
"f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:45.292 "is_configured": true, 00:15:45.293 "data_offset": 0, 00:15:45.293 "data_size": 65536 00:15:45.293 }, 00:15:45.293 { 00:15:45.293 "name": "BaseBdev2", 00:15:45.293 "uuid": "ba328faf-90e4-4037-be31-b5759de36763", 00:15:45.293 "is_configured": true, 00:15:45.293 "data_offset": 0, 00:15:45.293 "data_size": 65536 00:15:45.293 }, 00:15:45.293 { 00:15:45.293 "name": "BaseBdev3", 00:15:45.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.293 "is_configured": false, 00:15:45.293 "data_offset": 0, 00:15:45.293 "data_size": 0 00:15:45.293 }, 00:15:45.293 { 00:15:45.293 "name": "BaseBdev4", 00:15:45.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.293 "is_configured": false, 00:15:45.293 "data_offset": 0, 00:15:45.293 "data_size": 0 00:15:45.293 } 00:15:45.293 ] 00:15:45.293 }' 00:15:45.293 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.293 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.553 [2024-12-12 09:29:19.510748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.553 BaseBdev3 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.553 [ 00:15:45.553 { 00:15:45.553 "name": "BaseBdev3", 00:15:45.553 "aliases": [ 00:15:45.553 "dd9d1cff-a9dc-472f-bd92-012809fe8c1b" 00:15:45.553 ], 00:15:45.553 "product_name": "Malloc disk", 00:15:45.553 "block_size": 512, 00:15:45.553 "num_blocks": 65536, 00:15:45.553 "uuid": "dd9d1cff-a9dc-472f-bd92-012809fe8c1b", 00:15:45.553 "assigned_rate_limits": { 00:15:45.553 "rw_ios_per_sec": 0, 00:15:45.553 "rw_mbytes_per_sec": 0, 00:15:45.553 "r_mbytes_per_sec": 0, 00:15:45.553 "w_mbytes_per_sec": 0 00:15:45.553 }, 00:15:45.553 "claimed": true, 00:15:45.553 "claim_type": "exclusive_write", 00:15:45.553 "zoned": false, 00:15:45.553 "supported_io_types": { 00:15:45.553 "read": true, 00:15:45.553 "write": true, 00:15:45.553 "unmap": true, 00:15:45.553 "flush": true, 00:15:45.553 "reset": true, 00:15:45.553 "nvme_admin": false, 
00:15:45.553 "nvme_io": false, 00:15:45.553 "nvme_io_md": false, 00:15:45.553 "write_zeroes": true, 00:15:45.553 "zcopy": true, 00:15:45.553 "get_zone_info": false, 00:15:45.553 "zone_management": false, 00:15:45.553 "zone_append": false, 00:15:45.553 "compare": false, 00:15:45.553 "compare_and_write": false, 00:15:45.553 "abort": true, 00:15:45.553 "seek_hole": false, 00:15:45.553 "seek_data": false, 00:15:45.553 "copy": true, 00:15:45.553 "nvme_iov_md": false 00:15:45.553 }, 00:15:45.553 "memory_domains": [ 00:15:45.553 { 00:15:45.553 "dma_device_id": "system", 00:15:45.553 "dma_device_type": 1 00:15:45.553 }, 00:15:45.553 { 00:15:45.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.553 "dma_device_type": 2 00:15:45.553 } 00:15:45.553 ], 00:15:45.553 "driver_specific": {} 00:15:45.553 } 00:15:45.553 ] 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.553 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.813 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.813 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.813 "name": "Existed_Raid", 00:15:45.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.813 "strip_size_kb": 64, 00:15:45.813 "state": "configuring", 00:15:45.813 "raid_level": "raid5f", 00:15:45.813 "superblock": false, 00:15:45.813 "num_base_bdevs": 4, 00:15:45.813 "num_base_bdevs_discovered": 3, 00:15:45.813 "num_base_bdevs_operational": 4, 00:15:45.813 "base_bdevs_list": [ 00:15:45.813 { 00:15:45.813 "name": "BaseBdev1", 00:15:45.813 "uuid": "f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:45.813 "is_configured": true, 00:15:45.813 "data_offset": 0, 00:15:45.813 "data_size": 65536 00:15:45.813 }, 00:15:45.813 { 00:15:45.813 "name": "BaseBdev2", 00:15:45.813 "uuid": "ba328faf-90e4-4037-be31-b5759de36763", 00:15:45.813 "is_configured": true, 00:15:45.813 "data_offset": 0, 00:15:45.813 "data_size": 65536 00:15:45.813 }, 00:15:45.813 { 
00:15:45.813 "name": "BaseBdev3", 00:15:45.813 "uuid": "dd9d1cff-a9dc-472f-bd92-012809fe8c1b", 00:15:45.813 "is_configured": true, 00:15:45.813 "data_offset": 0, 00:15:45.813 "data_size": 65536 00:15:45.813 }, 00:15:45.813 { 00:15:45.813 "name": "BaseBdev4", 00:15:45.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.813 "is_configured": false, 00:15:45.813 "data_offset": 0, 00:15:45.813 "data_size": 0 00:15:45.813 } 00:15:45.813 ] 00:15:45.813 }' 00:15:45.813 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.813 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 09:29:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:46.072 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.072 09:29:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 [2024-12-12 09:29:20.024177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:46.072 [2024-12-12 09:29:20.024325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:46.072 [2024-12-12 09:29:20.024355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:46.072 [2024-12-12 09:29:20.024675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:46.072 [2024-12-12 09:29:20.031323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:46.072 [2024-12-12 09:29:20.031388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:46.072 [2024-12-12 09:29:20.031718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.072 BaseBdev4 00:15:46.072 09:29:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.072 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 [ 00:15:46.072 { 00:15:46.072 "name": "BaseBdev4", 00:15:46.072 "aliases": [ 00:15:46.072 "a93110ef-9dab-43bc-8296-549e89f610fb" 00:15:46.072 ], 00:15:46.072 "product_name": "Malloc disk", 00:15:46.072 "block_size": 512, 00:15:46.073 "num_blocks": 65536, 00:15:46.073 "uuid": "a93110ef-9dab-43bc-8296-549e89f610fb", 00:15:46.073 "assigned_rate_limits": { 00:15:46.073 "rw_ios_per_sec": 0, 00:15:46.073 
"rw_mbytes_per_sec": 0, 00:15:46.073 "r_mbytes_per_sec": 0, 00:15:46.073 "w_mbytes_per_sec": 0 00:15:46.073 }, 00:15:46.073 "claimed": true, 00:15:46.073 "claim_type": "exclusive_write", 00:15:46.073 "zoned": false, 00:15:46.073 "supported_io_types": { 00:15:46.073 "read": true, 00:15:46.073 "write": true, 00:15:46.073 "unmap": true, 00:15:46.073 "flush": true, 00:15:46.073 "reset": true, 00:15:46.073 "nvme_admin": false, 00:15:46.073 "nvme_io": false, 00:15:46.073 "nvme_io_md": false, 00:15:46.073 "write_zeroes": true, 00:15:46.073 "zcopy": true, 00:15:46.073 "get_zone_info": false, 00:15:46.073 "zone_management": false, 00:15:46.073 "zone_append": false, 00:15:46.073 "compare": false, 00:15:46.073 "compare_and_write": false, 00:15:46.073 "abort": true, 00:15:46.073 "seek_hole": false, 00:15:46.073 "seek_data": false, 00:15:46.073 "copy": true, 00:15:46.073 "nvme_iov_md": false 00:15:46.073 }, 00:15:46.073 "memory_domains": [ 00:15:46.073 { 00:15:46.073 "dma_device_id": "system", 00:15:46.073 "dma_device_type": 1 00:15:46.073 }, 00:15:46.073 { 00:15:46.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.073 "dma_device_type": 2 00:15:46.073 } 00:15:46.073 ], 00:15:46.073 "driver_specific": {} 00:15:46.073 } 00:15:46.073 ] 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.073 09:29:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.073 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.333 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.333 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.333 "name": "Existed_Raid", 00:15:46.333 "uuid": "4c21f4e5-3e03-4f09-8663-e99616b2ad57", 00:15:46.333 "strip_size_kb": 64, 00:15:46.333 "state": "online", 00:15:46.333 "raid_level": "raid5f", 00:15:46.333 "superblock": false, 00:15:46.333 "num_base_bdevs": 4, 00:15:46.333 "num_base_bdevs_discovered": 4, 00:15:46.333 "num_base_bdevs_operational": 4, 00:15:46.333 "base_bdevs_list": [ 00:15:46.333 { 00:15:46.333 "name": 
"BaseBdev1", 00:15:46.333 "uuid": "f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:46.333 "is_configured": true, 00:15:46.333 "data_offset": 0, 00:15:46.333 "data_size": 65536 00:15:46.333 }, 00:15:46.333 { 00:15:46.333 "name": "BaseBdev2", 00:15:46.333 "uuid": "ba328faf-90e4-4037-be31-b5759de36763", 00:15:46.333 "is_configured": true, 00:15:46.333 "data_offset": 0, 00:15:46.333 "data_size": 65536 00:15:46.333 }, 00:15:46.333 { 00:15:46.333 "name": "BaseBdev3", 00:15:46.333 "uuid": "dd9d1cff-a9dc-472f-bd92-012809fe8c1b", 00:15:46.333 "is_configured": true, 00:15:46.333 "data_offset": 0, 00:15:46.333 "data_size": 65536 00:15:46.333 }, 00:15:46.333 { 00:15:46.333 "name": "BaseBdev4", 00:15:46.333 "uuid": "a93110ef-9dab-43bc-8296-549e89f610fb", 00:15:46.333 "is_configured": true, 00:15:46.333 "data_offset": 0, 00:15:46.333 "data_size": 65536 00:15:46.333 } 00:15:46.333 ] 00:15:46.333 }' 00:15:46.333 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.333 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.592 [2024-12-12 09:29:20.508020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.592 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.592 "name": "Existed_Raid", 00:15:46.592 "aliases": [ 00:15:46.592 "4c21f4e5-3e03-4f09-8663-e99616b2ad57" 00:15:46.592 ], 00:15:46.592 "product_name": "Raid Volume", 00:15:46.592 "block_size": 512, 00:15:46.592 "num_blocks": 196608, 00:15:46.592 "uuid": "4c21f4e5-3e03-4f09-8663-e99616b2ad57", 00:15:46.592 "assigned_rate_limits": { 00:15:46.592 "rw_ios_per_sec": 0, 00:15:46.592 "rw_mbytes_per_sec": 0, 00:15:46.592 "r_mbytes_per_sec": 0, 00:15:46.592 "w_mbytes_per_sec": 0 00:15:46.593 }, 00:15:46.593 "claimed": false, 00:15:46.593 "zoned": false, 00:15:46.593 "supported_io_types": { 00:15:46.593 "read": true, 00:15:46.593 "write": true, 00:15:46.593 "unmap": false, 00:15:46.593 "flush": false, 00:15:46.593 "reset": true, 00:15:46.593 "nvme_admin": false, 00:15:46.593 "nvme_io": false, 00:15:46.593 "nvme_io_md": false, 00:15:46.593 "write_zeroes": true, 00:15:46.593 "zcopy": false, 00:15:46.593 "get_zone_info": false, 00:15:46.593 "zone_management": false, 00:15:46.593 "zone_append": false, 00:15:46.593 "compare": false, 00:15:46.593 "compare_and_write": false, 00:15:46.593 "abort": false, 00:15:46.593 "seek_hole": false, 00:15:46.593 "seek_data": false, 00:15:46.593 "copy": false, 00:15:46.593 "nvme_iov_md": false 00:15:46.593 }, 00:15:46.593 "driver_specific": { 00:15:46.593 "raid": { 00:15:46.593 "uuid": "4c21f4e5-3e03-4f09-8663-e99616b2ad57", 00:15:46.593 "strip_size_kb": 64, 
00:15:46.593 "state": "online", 00:15:46.593 "raid_level": "raid5f", 00:15:46.593 "superblock": false, 00:15:46.593 "num_base_bdevs": 4, 00:15:46.593 "num_base_bdevs_discovered": 4, 00:15:46.593 "num_base_bdevs_operational": 4, 00:15:46.593 "base_bdevs_list": [ 00:15:46.593 { 00:15:46.593 "name": "BaseBdev1", 00:15:46.593 "uuid": "f29a2178-b985-4f18-99e6-f48b7ac04685", 00:15:46.593 "is_configured": true, 00:15:46.593 "data_offset": 0, 00:15:46.593 "data_size": 65536 00:15:46.593 }, 00:15:46.593 { 00:15:46.593 "name": "BaseBdev2", 00:15:46.593 "uuid": "ba328faf-90e4-4037-be31-b5759de36763", 00:15:46.593 "is_configured": true, 00:15:46.593 "data_offset": 0, 00:15:46.593 "data_size": 65536 00:15:46.593 }, 00:15:46.593 { 00:15:46.593 "name": "BaseBdev3", 00:15:46.593 "uuid": "dd9d1cff-a9dc-472f-bd92-012809fe8c1b", 00:15:46.593 "is_configured": true, 00:15:46.593 "data_offset": 0, 00:15:46.593 "data_size": 65536 00:15:46.593 }, 00:15:46.593 { 00:15:46.593 "name": "BaseBdev4", 00:15:46.593 "uuid": "a93110ef-9dab-43bc-8296-549e89f610fb", 00:15:46.593 "is_configured": true, 00:15:46.593 "data_offset": 0, 00:15:46.593 "data_size": 65536 00:15:46.593 } 00:15:46.593 ] 00:15:46.593 } 00:15:46.593 } 00:15:46.593 }' 00:15:46.593 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.593 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:46.593 BaseBdev2 00:15:46.593 BaseBdev3 00:15:46.593 BaseBdev4' 00:15:46.593 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.852 09:29:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.852 09:29:20 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:46.852 [2024-12-12 09:29:20.859409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.114 09:29:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.114 09:29:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.114 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.114 "name": "Existed_Raid", 00:15:47.115 "uuid": "4c21f4e5-3e03-4f09-8663-e99616b2ad57", 00:15:47.115 "strip_size_kb": 64, 00:15:47.115 "state": "online", 00:15:47.115 "raid_level": "raid5f", 00:15:47.115 "superblock": false, 00:15:47.115 "num_base_bdevs": 4, 00:15:47.115 "num_base_bdevs_discovered": 3, 00:15:47.115 "num_base_bdevs_operational": 3, 00:15:47.115 "base_bdevs_list": [ 00:15:47.115 { 00:15:47.115 "name": null, 00:15:47.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.115 "is_configured": false, 00:15:47.115 "data_offset": 0, 00:15:47.115 "data_size": 65536 00:15:47.115 }, 00:15:47.115 { 00:15:47.115 "name": "BaseBdev2", 00:15:47.115 "uuid": "ba328faf-90e4-4037-be31-b5759de36763", 00:15:47.115 "is_configured": true, 00:15:47.115 "data_offset": 0, 00:15:47.115 "data_size": 65536 00:15:47.115 }, 00:15:47.115 { 00:15:47.115 "name": "BaseBdev3", 00:15:47.115 "uuid": "dd9d1cff-a9dc-472f-bd92-012809fe8c1b", 00:15:47.115 "is_configured": true, 00:15:47.115 "data_offset": 0, 00:15:47.115 "data_size": 65536 00:15:47.115 }, 00:15:47.115 { 00:15:47.115 "name": "BaseBdev4", 00:15:47.115 "uuid": "a93110ef-9dab-43bc-8296-549e89f610fb", 00:15:47.115 "is_configured": true, 00:15:47.115 "data_offset": 0, 00:15:47.115 "data_size": 65536 00:15:47.115 } 00:15:47.115 ] 00:15:47.115 }' 00:15:47.115 
09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.115 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.385 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.651 [2024-12-12 09:29:21.404098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.651 [2024-12-12 09:29:21.404316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.651 [2024-12-12 09:29:21.503724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.651 [2024-12-12 09:29:21.563649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.651 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.652 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.652 09:29:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.652 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.652 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 [2024-12-12 09:29:21.720448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:47.912 [2024-12-12 09:29:21.720507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 BaseBdev2 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.912 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.172 [ 00:15:48.172 { 00:15:48.172 "name": "BaseBdev2", 00:15:48.172 "aliases": [ 00:15:48.172 "59d1075f-ed35-4491-820d-778d761cb6b1" 00:15:48.172 ], 00:15:48.172 "product_name": "Malloc disk", 00:15:48.172 "block_size": 512, 00:15:48.172 "num_blocks": 65536, 00:15:48.172 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:48.172 "assigned_rate_limits": { 00:15:48.172 "rw_ios_per_sec": 0, 00:15:48.172 "rw_mbytes_per_sec": 0, 00:15:48.172 "r_mbytes_per_sec": 0, 00:15:48.172 "w_mbytes_per_sec": 0 00:15:48.172 }, 00:15:48.172 "claimed": false, 00:15:48.172 "zoned": false, 00:15:48.172 "supported_io_types": { 00:15:48.172 "read": true, 00:15:48.172 "write": true, 00:15:48.172 "unmap": true, 00:15:48.172 "flush": true, 00:15:48.172 "reset": true, 00:15:48.172 "nvme_admin": false, 00:15:48.172 "nvme_io": false, 00:15:48.172 "nvme_io_md": false, 00:15:48.172 "write_zeroes": true, 00:15:48.172 "zcopy": true, 00:15:48.172 "get_zone_info": false, 00:15:48.172 "zone_management": false, 00:15:48.172 "zone_append": false, 00:15:48.172 "compare": false, 00:15:48.172 "compare_and_write": false, 00:15:48.172 "abort": true, 00:15:48.172 "seek_hole": false, 00:15:48.172 "seek_data": false, 00:15:48.172 "copy": true, 00:15:48.172 "nvme_iov_md": false 00:15:48.172 }, 00:15:48.172 "memory_domains": [ 00:15:48.172 { 00:15:48.172 "dma_device_id": "system", 00:15:48.172 
"dma_device_type": 1 00:15:48.172 }, 00:15:48.172 { 00:15:48.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.173 "dma_device_type": 2 00:15:48.173 } 00:15:48.173 ], 00:15:48.173 "driver_specific": {} 00:15:48.173 } 00:15:48.173 ] 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 BaseBdev3 00:15:48.173 09:29:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.173 09:29:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 [ 00:15:48.173 { 00:15:48.173 "name": "BaseBdev3", 00:15:48.173 "aliases": [ 00:15:48.173 "750256d9-10e9-4be6-93b5-035f28e99e7d" 00:15:48.173 ], 00:15:48.173 "product_name": "Malloc disk", 00:15:48.173 "block_size": 512, 00:15:48.173 "num_blocks": 65536, 00:15:48.173 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:48.173 "assigned_rate_limits": { 00:15:48.173 "rw_ios_per_sec": 0, 00:15:48.173 "rw_mbytes_per_sec": 0, 00:15:48.173 "r_mbytes_per_sec": 0, 00:15:48.173 "w_mbytes_per_sec": 0 00:15:48.173 }, 00:15:48.173 "claimed": false, 00:15:48.173 "zoned": false, 00:15:48.173 "supported_io_types": { 00:15:48.173 "read": true, 00:15:48.173 "write": true, 00:15:48.173 "unmap": true, 00:15:48.173 "flush": true, 00:15:48.173 "reset": true, 00:15:48.173 "nvme_admin": false, 00:15:48.173 "nvme_io": false, 00:15:48.173 "nvme_io_md": false, 00:15:48.173 "write_zeroes": true, 00:15:48.173 "zcopy": true, 00:15:48.173 "get_zone_info": false, 00:15:48.173 "zone_management": false, 00:15:48.173 "zone_append": false, 00:15:48.173 "compare": false, 00:15:48.173 "compare_and_write": false, 00:15:48.173 "abort": true, 00:15:48.173 "seek_hole": false, 00:15:48.173 "seek_data": false, 00:15:48.173 "copy": true, 00:15:48.173 "nvme_iov_md": false 00:15:48.173 }, 00:15:48.173 "memory_domains": [ 00:15:48.173 { 00:15:48.173 
"dma_device_id": "system", 00:15:48.173 "dma_device_type": 1 00:15:48.173 }, 00:15:48.173 { 00:15:48.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.173 "dma_device_type": 2 00:15:48.173 } 00:15:48.173 ], 00:15:48.173 "driver_specific": {} 00:15:48.173 } 00:15:48.173 ] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 BaseBdev4 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 [ 00:15:48.173 { 00:15:48.173 "name": "BaseBdev4", 00:15:48.173 "aliases": [ 00:15:48.173 "9aea065c-41e9-452b-ad72-8b9dc5803c7f" 00:15:48.173 ], 00:15:48.173 "product_name": "Malloc disk", 00:15:48.173 "block_size": 512, 00:15:48.173 "num_blocks": 65536, 00:15:48.173 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:48.173 "assigned_rate_limits": { 00:15:48.173 "rw_ios_per_sec": 0, 00:15:48.173 "rw_mbytes_per_sec": 0, 00:15:48.173 "r_mbytes_per_sec": 0, 00:15:48.173 "w_mbytes_per_sec": 0 00:15:48.173 }, 00:15:48.173 "claimed": false, 00:15:48.173 "zoned": false, 00:15:48.173 "supported_io_types": { 00:15:48.173 "read": true, 00:15:48.173 "write": true, 00:15:48.173 "unmap": true, 00:15:48.173 "flush": true, 00:15:48.173 "reset": true, 00:15:48.173 "nvme_admin": false, 00:15:48.173 "nvme_io": false, 00:15:48.173 "nvme_io_md": false, 00:15:48.173 "write_zeroes": true, 00:15:48.173 "zcopy": true, 00:15:48.173 "get_zone_info": false, 00:15:48.173 "zone_management": false, 00:15:48.173 "zone_append": false, 00:15:48.173 "compare": false, 00:15:48.173 "compare_and_write": false, 00:15:48.173 "abort": true, 00:15:48.173 "seek_hole": false, 00:15:48.173 "seek_data": false, 00:15:48.173 "copy": true, 00:15:48.173 "nvme_iov_md": false 00:15:48.173 }, 00:15:48.173 "memory_domains": [ 
00:15:48.173 { 00:15:48.173 "dma_device_id": "system", 00:15:48.173 "dma_device_type": 1 00:15:48.173 }, 00:15:48.173 { 00:15:48.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.173 "dma_device_type": 2 00:15:48.173 } 00:15:48.173 ], 00:15:48.173 "driver_specific": {} 00:15:48.173 } 00:15:48.173 ] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 [2024-12-12 09:29:22.129746] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.173 [2024-12-12 09:29:22.129862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.173 [2024-12-12 09:29:22.129905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.173 [2024-12-12 09:29:22.132046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.173 [2024-12-12 09:29:22.132142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.174 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.174 "name": "Existed_Raid", 00:15:48.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.174 "strip_size_kb": 64, 00:15:48.174 "state": "configuring", 00:15:48.174 "raid_level": "raid5f", 00:15:48.174 
"superblock": false, 00:15:48.174 "num_base_bdevs": 4, 00:15:48.174 "num_base_bdevs_discovered": 3, 00:15:48.174 "num_base_bdevs_operational": 4, 00:15:48.174 "base_bdevs_list": [ 00:15:48.174 { 00:15:48.174 "name": "BaseBdev1", 00:15:48.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.174 "is_configured": false, 00:15:48.174 "data_offset": 0, 00:15:48.174 "data_size": 0 00:15:48.174 }, 00:15:48.174 { 00:15:48.174 "name": "BaseBdev2", 00:15:48.174 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:48.174 "is_configured": true, 00:15:48.174 "data_offset": 0, 00:15:48.174 "data_size": 65536 00:15:48.174 }, 00:15:48.174 { 00:15:48.174 "name": "BaseBdev3", 00:15:48.174 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:48.174 "is_configured": true, 00:15:48.174 "data_offset": 0, 00:15:48.174 "data_size": 65536 00:15:48.174 }, 00:15:48.174 { 00:15:48.174 "name": "BaseBdev4", 00:15:48.174 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:48.174 "is_configured": true, 00:15:48.174 "data_offset": 0, 00:15:48.174 "data_size": 65536 00:15:48.174 } 00:15:48.174 ] 00:15:48.174 }' 00:15:48.174 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.174 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.746 [2024-12-12 09:29:22.572972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.746 "name": "Existed_Raid", 00:15:48.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.746 "strip_size_kb": 64, 00:15:48.746 "state": "configuring", 00:15:48.746 "raid_level": "raid5f", 00:15:48.746 "superblock": false, 
00:15:48.746 "num_base_bdevs": 4, 00:15:48.746 "num_base_bdevs_discovered": 2, 00:15:48.746 "num_base_bdevs_operational": 4, 00:15:48.746 "base_bdevs_list": [ 00:15:48.746 { 00:15:48.746 "name": "BaseBdev1", 00:15:48.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.746 "is_configured": false, 00:15:48.746 "data_offset": 0, 00:15:48.746 "data_size": 0 00:15:48.746 }, 00:15:48.746 { 00:15:48.746 "name": null, 00:15:48.746 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:48.746 "is_configured": false, 00:15:48.746 "data_offset": 0, 00:15:48.746 "data_size": 65536 00:15:48.746 }, 00:15:48.746 { 00:15:48.746 "name": "BaseBdev3", 00:15:48.746 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:48.746 "is_configured": true, 00:15:48.746 "data_offset": 0, 00:15:48.746 "data_size": 65536 00:15:48.746 }, 00:15:48.746 { 00:15:48.746 "name": "BaseBdev4", 00:15:48.746 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:48.746 "is_configured": true, 00:15:48.746 "data_offset": 0, 00:15:48.746 "data_size": 65536 00:15:48.746 } 00:15:48.746 ] 00:15:48.746 }' 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.746 09:29:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:49.316 
09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.316 [2024-12-12 09:29:23.152829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.316 BaseBdev1 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.316 
09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.316 [ 00:15:49.316 { 00:15:49.316 "name": "BaseBdev1", 00:15:49.316 "aliases": [ 00:15:49.316 "93931865-fcc7-42de-bbd1-fbbf31701f82" 00:15:49.316 ], 00:15:49.316 "product_name": "Malloc disk", 00:15:49.316 "block_size": 512, 00:15:49.316 "num_blocks": 65536, 00:15:49.316 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:49.316 "assigned_rate_limits": { 00:15:49.316 "rw_ios_per_sec": 0, 00:15:49.316 "rw_mbytes_per_sec": 0, 00:15:49.316 "r_mbytes_per_sec": 0, 00:15:49.316 "w_mbytes_per_sec": 0 00:15:49.316 }, 00:15:49.316 "claimed": true, 00:15:49.316 "claim_type": "exclusive_write", 00:15:49.316 "zoned": false, 00:15:49.316 "supported_io_types": { 00:15:49.316 "read": true, 00:15:49.316 "write": true, 00:15:49.316 "unmap": true, 00:15:49.316 "flush": true, 00:15:49.316 "reset": true, 00:15:49.316 "nvme_admin": false, 00:15:49.316 "nvme_io": false, 00:15:49.316 "nvme_io_md": false, 00:15:49.316 "write_zeroes": true, 00:15:49.316 "zcopy": true, 00:15:49.316 "get_zone_info": false, 00:15:49.316 "zone_management": false, 00:15:49.316 "zone_append": false, 00:15:49.316 "compare": false, 00:15:49.316 "compare_and_write": false, 00:15:49.316 "abort": true, 00:15:49.316 "seek_hole": false, 00:15:49.316 "seek_data": false, 00:15:49.316 "copy": true, 00:15:49.316 "nvme_iov_md": false 00:15:49.316 }, 00:15:49.316 "memory_domains": [ 00:15:49.316 { 00:15:49.316 "dma_device_id": "system", 00:15:49.316 "dma_device_type": 1 00:15:49.316 }, 00:15:49.316 { 00:15:49.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.316 "dma_device_type": 2 00:15:49.316 } 00:15:49.316 ], 00:15:49.316 "driver_specific": {} 00:15:49.316 } 00:15:49.316 ] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:49.316 09:29:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.316 "name": "Existed_Raid", 00:15:49.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.316 "strip_size_kb": 64, 00:15:49.316 "state": 
"configuring", 00:15:49.316 "raid_level": "raid5f", 00:15:49.316 "superblock": false, 00:15:49.316 "num_base_bdevs": 4, 00:15:49.316 "num_base_bdevs_discovered": 3, 00:15:49.316 "num_base_bdevs_operational": 4, 00:15:49.316 "base_bdevs_list": [ 00:15:49.316 { 00:15:49.316 "name": "BaseBdev1", 00:15:49.316 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:49.316 "is_configured": true, 00:15:49.316 "data_offset": 0, 00:15:49.316 "data_size": 65536 00:15:49.316 }, 00:15:49.316 { 00:15:49.316 "name": null, 00:15:49.316 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:49.316 "is_configured": false, 00:15:49.316 "data_offset": 0, 00:15:49.316 "data_size": 65536 00:15:49.316 }, 00:15:49.316 { 00:15:49.316 "name": "BaseBdev3", 00:15:49.316 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:49.316 "is_configured": true, 00:15:49.316 "data_offset": 0, 00:15:49.316 "data_size": 65536 00:15:49.316 }, 00:15:49.316 { 00:15:49.316 "name": "BaseBdev4", 00:15:49.316 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:49.316 "is_configured": true, 00:15:49.316 "data_offset": 0, 00:15:49.316 "data_size": 65536 00:15:49.316 } 00:15:49.316 ] 00:15:49.316 }' 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.316 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.884 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.884 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.884 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.884 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.884 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.885 09:29:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.885 [2024-12-12 09:29:23.683974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.885 09:29:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.885 "name": "Existed_Raid", 00:15:49.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.885 "strip_size_kb": 64, 00:15:49.885 "state": "configuring", 00:15:49.885 "raid_level": "raid5f", 00:15:49.885 "superblock": false, 00:15:49.885 "num_base_bdevs": 4, 00:15:49.885 "num_base_bdevs_discovered": 2, 00:15:49.885 "num_base_bdevs_operational": 4, 00:15:49.885 "base_bdevs_list": [ 00:15:49.885 { 00:15:49.885 "name": "BaseBdev1", 00:15:49.885 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:49.885 "is_configured": true, 00:15:49.885 "data_offset": 0, 00:15:49.885 "data_size": 65536 00:15:49.885 }, 00:15:49.885 { 00:15:49.885 "name": null, 00:15:49.885 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:49.885 "is_configured": false, 00:15:49.885 "data_offset": 0, 00:15:49.885 "data_size": 65536 00:15:49.885 }, 00:15:49.885 { 00:15:49.885 "name": null, 00:15:49.885 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:49.885 "is_configured": false, 00:15:49.885 "data_offset": 0, 00:15:49.885 "data_size": 65536 00:15:49.885 }, 00:15:49.885 { 00:15:49.885 "name": "BaseBdev4", 00:15:49.885 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:49.885 "is_configured": true, 00:15:49.885 "data_offset": 0, 00:15:49.885 "data_size": 65536 00:15:49.885 } 00:15:49.885 ] 00:15:49.885 }' 00:15:49.885 09:29:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.885 09:29:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.144 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.144 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.144 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.402 [2024-12-12 09:29:24.191596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.402 
09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.402 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.402 "name": "Existed_Raid", 00:15:50.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.402 "strip_size_kb": 64, 00:15:50.402 "state": "configuring", 00:15:50.402 "raid_level": "raid5f", 00:15:50.402 "superblock": false, 00:15:50.402 "num_base_bdevs": 4, 00:15:50.402 "num_base_bdevs_discovered": 3, 00:15:50.402 "num_base_bdevs_operational": 4, 00:15:50.402 "base_bdevs_list": [ 00:15:50.402 { 00:15:50.402 "name": "BaseBdev1", 00:15:50.402 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:50.402 "is_configured": true, 00:15:50.402 "data_offset": 0, 00:15:50.402 "data_size": 65536 00:15:50.402 }, 00:15:50.402 { 00:15:50.402 "name": null, 00:15:50.402 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:50.402 "is_configured": 
false, 00:15:50.402 "data_offset": 0, 00:15:50.402 "data_size": 65536 00:15:50.402 }, 00:15:50.402 { 00:15:50.402 "name": "BaseBdev3", 00:15:50.403 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:50.403 "is_configured": true, 00:15:50.403 "data_offset": 0, 00:15:50.403 "data_size": 65536 00:15:50.403 }, 00:15:50.403 { 00:15:50.403 "name": "BaseBdev4", 00:15:50.403 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:50.403 "is_configured": true, 00:15:50.403 "data_offset": 0, 00:15:50.403 "data_size": 65536 00:15:50.403 } 00:15:50.403 ] 00:15:50.403 }' 00:15:50.403 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.403 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.662 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.662 [2024-12-12 09:29:24.678809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.922 "name": "Existed_Raid", 00:15:50.922 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:50.922 "strip_size_kb": 64, 00:15:50.922 "state": "configuring", 00:15:50.922 "raid_level": "raid5f", 00:15:50.922 "superblock": false, 00:15:50.922 "num_base_bdevs": 4, 00:15:50.922 "num_base_bdevs_discovered": 2, 00:15:50.922 "num_base_bdevs_operational": 4, 00:15:50.922 "base_bdevs_list": [ 00:15:50.922 { 00:15:50.922 "name": null, 00:15:50.922 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:50.922 "is_configured": false, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 }, 00:15:50.922 { 00:15:50.922 "name": null, 00:15:50.922 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:50.922 "is_configured": false, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 }, 00:15:50.922 { 00:15:50.922 "name": "BaseBdev3", 00:15:50.922 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:50.922 "is_configured": true, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 }, 00:15:50.922 { 00:15:50.922 "name": "BaseBdev4", 00:15:50.922 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:50.922 "is_configured": true, 00:15:50.922 "data_offset": 0, 00:15:50.922 "data_size": 65536 00:15:50.922 } 00:15:50.922 ] 00:15:50.922 }' 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.922 09:29:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.181 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.181 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.181 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 09:29:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.440 [2024-12-12 09:29:25.220499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.440 "name": "Existed_Raid", 00:15:51.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.440 "strip_size_kb": 64, 00:15:51.440 "state": "configuring", 00:15:51.440 "raid_level": "raid5f", 00:15:51.440 "superblock": false, 00:15:51.440 "num_base_bdevs": 4, 00:15:51.440 "num_base_bdevs_discovered": 3, 00:15:51.440 "num_base_bdevs_operational": 4, 00:15:51.440 "base_bdevs_list": [ 00:15:51.440 { 00:15:51.440 "name": null, 00:15:51.440 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:51.440 "is_configured": false, 00:15:51.440 "data_offset": 0, 00:15:51.440 "data_size": 65536 00:15:51.440 }, 00:15:51.440 { 00:15:51.440 "name": "BaseBdev2", 00:15:51.440 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:51.440 "is_configured": true, 00:15:51.440 "data_offset": 0, 00:15:51.440 "data_size": 65536 00:15:51.440 }, 00:15:51.440 { 00:15:51.440 "name": "BaseBdev3", 00:15:51.440 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:51.440 "is_configured": true, 00:15:51.440 "data_offset": 0, 00:15:51.440 "data_size": 65536 00:15:51.440 }, 00:15:51.440 { 00:15:51.440 "name": "BaseBdev4", 00:15:51.440 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:51.440 "is_configured": true, 00:15:51.440 "data_offset": 0, 00:15:51.440 "data_size": 65536 00:15:51.440 } 00:15:51.440 ] 00:15:51.440 }' 00:15:51.440 09:29:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.440 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.700 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.959 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:51.959 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.959 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 93931865-fcc7-42de-bbd1-fbbf31701f82 00:15:51.959 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.959 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.959 [2024-12-12 09:29:25.807158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:51.960 [2024-12-12 
09:29:25.807273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:51.960 [2024-12-12 09:29:25.807298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:51.960 [2024-12-12 09:29:25.807631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:51.960 [2024-12-12 09:29:25.814224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:51.960 [2024-12-12 09:29:25.814249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:51.960 [2024-12-12 09:29:25.814515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.960 NewBaseBdev 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.960 [ 00:15:51.960 { 00:15:51.960 "name": "NewBaseBdev", 00:15:51.960 "aliases": [ 00:15:51.960 "93931865-fcc7-42de-bbd1-fbbf31701f82" 00:15:51.960 ], 00:15:51.960 "product_name": "Malloc disk", 00:15:51.960 "block_size": 512, 00:15:51.960 "num_blocks": 65536, 00:15:51.960 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:51.960 "assigned_rate_limits": { 00:15:51.960 "rw_ios_per_sec": 0, 00:15:51.960 "rw_mbytes_per_sec": 0, 00:15:51.960 "r_mbytes_per_sec": 0, 00:15:51.960 "w_mbytes_per_sec": 0 00:15:51.960 }, 00:15:51.960 "claimed": true, 00:15:51.960 "claim_type": "exclusive_write", 00:15:51.960 "zoned": false, 00:15:51.960 "supported_io_types": { 00:15:51.960 "read": true, 00:15:51.960 "write": true, 00:15:51.960 "unmap": true, 00:15:51.960 "flush": true, 00:15:51.960 "reset": true, 00:15:51.960 "nvme_admin": false, 00:15:51.960 "nvme_io": false, 00:15:51.960 "nvme_io_md": false, 00:15:51.960 "write_zeroes": true, 00:15:51.960 "zcopy": true, 00:15:51.960 "get_zone_info": false, 00:15:51.960 "zone_management": false, 00:15:51.960 "zone_append": false, 00:15:51.960 "compare": false, 00:15:51.960 "compare_and_write": false, 00:15:51.960 "abort": true, 00:15:51.960 "seek_hole": false, 00:15:51.960 "seek_data": false, 00:15:51.960 "copy": true, 00:15:51.960 "nvme_iov_md": false 00:15:51.960 }, 00:15:51.960 "memory_domains": [ 00:15:51.960 { 00:15:51.960 "dma_device_id": "system", 00:15:51.960 "dma_device_type": 1 00:15:51.960 }, 00:15:51.960 { 00:15:51.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.960 "dma_device_type": 2 00:15:51.960 } 
00:15:51.960 ], 00:15:51.960 "driver_specific": {} 00:15:51.960 } 00:15:51.960 ] 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.960 "name": "Existed_Raid", 00:15:51.960 "uuid": "6bae33c0-57f5-488f-b882-c3f4a9622c87", 00:15:51.960 "strip_size_kb": 64, 00:15:51.960 "state": "online", 00:15:51.960 "raid_level": "raid5f", 00:15:51.960 "superblock": false, 00:15:51.960 "num_base_bdevs": 4, 00:15:51.960 "num_base_bdevs_discovered": 4, 00:15:51.960 "num_base_bdevs_operational": 4, 00:15:51.960 "base_bdevs_list": [ 00:15:51.960 { 00:15:51.960 "name": "NewBaseBdev", 00:15:51.960 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:51.960 "is_configured": true, 00:15:51.960 "data_offset": 0, 00:15:51.960 "data_size": 65536 00:15:51.960 }, 00:15:51.960 { 00:15:51.960 "name": "BaseBdev2", 00:15:51.960 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:51.960 "is_configured": true, 00:15:51.960 "data_offset": 0, 00:15:51.960 "data_size": 65536 00:15:51.960 }, 00:15:51.960 { 00:15:51.960 "name": "BaseBdev3", 00:15:51.960 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:51.960 "is_configured": true, 00:15:51.960 "data_offset": 0, 00:15:51.960 "data_size": 65536 00:15:51.960 }, 00:15:51.960 { 00:15:51.960 "name": "BaseBdev4", 00:15:51.960 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:51.960 "is_configured": true, 00:15:51.960 "data_offset": 0, 00:15:51.960 "data_size": 65536 00:15:51.960 } 00:15:51.960 ] 00:15:51.960 }' 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.960 09:29:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.528 [2024-12-12 09:29:26.327036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.528 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.528 "name": "Existed_Raid", 00:15:52.528 "aliases": [ 00:15:52.528 "6bae33c0-57f5-488f-b882-c3f4a9622c87" 00:15:52.528 ], 00:15:52.529 "product_name": "Raid Volume", 00:15:52.529 "block_size": 512, 00:15:52.529 "num_blocks": 196608, 00:15:52.529 "uuid": "6bae33c0-57f5-488f-b882-c3f4a9622c87", 00:15:52.529 "assigned_rate_limits": { 00:15:52.529 "rw_ios_per_sec": 0, 00:15:52.529 "rw_mbytes_per_sec": 0, 00:15:52.529 "r_mbytes_per_sec": 0, 00:15:52.529 "w_mbytes_per_sec": 0 00:15:52.529 }, 00:15:52.529 "claimed": false, 00:15:52.529 "zoned": false, 00:15:52.529 "supported_io_types": { 00:15:52.529 "read": true, 00:15:52.529 "write": true, 00:15:52.529 "unmap": false, 00:15:52.529 "flush": false, 00:15:52.529 "reset": true, 00:15:52.529 "nvme_admin": false, 00:15:52.529 "nvme_io": false, 00:15:52.529 "nvme_io_md": 
false, 00:15:52.529 "write_zeroes": true, 00:15:52.529 "zcopy": false, 00:15:52.529 "get_zone_info": false, 00:15:52.529 "zone_management": false, 00:15:52.529 "zone_append": false, 00:15:52.529 "compare": false, 00:15:52.529 "compare_and_write": false, 00:15:52.529 "abort": false, 00:15:52.529 "seek_hole": false, 00:15:52.529 "seek_data": false, 00:15:52.529 "copy": false, 00:15:52.529 "nvme_iov_md": false 00:15:52.529 }, 00:15:52.529 "driver_specific": { 00:15:52.529 "raid": { 00:15:52.529 "uuid": "6bae33c0-57f5-488f-b882-c3f4a9622c87", 00:15:52.529 "strip_size_kb": 64, 00:15:52.529 "state": "online", 00:15:52.529 "raid_level": "raid5f", 00:15:52.529 "superblock": false, 00:15:52.529 "num_base_bdevs": 4, 00:15:52.529 "num_base_bdevs_discovered": 4, 00:15:52.529 "num_base_bdevs_operational": 4, 00:15:52.529 "base_bdevs_list": [ 00:15:52.529 { 00:15:52.529 "name": "NewBaseBdev", 00:15:52.529 "uuid": "93931865-fcc7-42de-bbd1-fbbf31701f82", 00:15:52.529 "is_configured": true, 00:15:52.529 "data_offset": 0, 00:15:52.529 "data_size": 65536 00:15:52.529 }, 00:15:52.529 { 00:15:52.529 "name": "BaseBdev2", 00:15:52.529 "uuid": "59d1075f-ed35-4491-820d-778d761cb6b1", 00:15:52.529 "is_configured": true, 00:15:52.529 "data_offset": 0, 00:15:52.529 "data_size": 65536 00:15:52.529 }, 00:15:52.529 { 00:15:52.529 "name": "BaseBdev3", 00:15:52.529 "uuid": "750256d9-10e9-4be6-93b5-035f28e99e7d", 00:15:52.529 "is_configured": true, 00:15:52.529 "data_offset": 0, 00:15:52.529 "data_size": 65536 00:15:52.529 }, 00:15:52.529 { 00:15:52.529 "name": "BaseBdev4", 00:15:52.529 "uuid": "9aea065c-41e9-452b-ad72-8b9dc5803c7f", 00:15:52.529 "is_configured": true, 00:15:52.529 "data_offset": 0, 00:15:52.529 "data_size": 65536 00:15:52.529 } 00:15:52.529 ] 00:15:52.529 } 00:15:52.529 } 00:15:52.529 }' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.529 09:29:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:52.529 BaseBdev2 00:15:52.529 BaseBdev3 00:15:52.529 BaseBdev4' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.529 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.788 09:29:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.788 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.788 [2024-12-12 09:29:26.630316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.788 [2024-12-12 09:29:26.630341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.788 [2024-12-12 09:29:26.630401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.789 [2024-12-12 09:29:26.630705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.789 [2024-12-12 09:29:26.630715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83904 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83904 ']' 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83904 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
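The teardown trace above follows the usual killprocess pattern from autotest_common.sh: check that the pid argument is non-empty, probe the process with `kill -0`, inspect the process name via `ps -o comm=`, then terminate and reap it. A minimal sketch of that pattern (the helper name and body here are a simplified reconstruction for illustration, not the exact SPDK implementation, which also special-cases sudo-spawned processes):

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper: validate the pid, confirm the
# process is alive, then send SIGTERM and reap it.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1               # pid argument must be set
    kill -0 "$pid" 2>/dev/null || return 1  # process must exist
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap; ignore SIGTERM exit status
    return 0
}
```

In the log, the `ps --no-headers -o comm= 83904` step resolves the process name (`reactor_0`) before the kill, so the helper can branch on whether the target was started via sudo.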
00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83904 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.789 killing process with pid 83904 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83904' 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83904 00:15:52.789 [2024-12-12 09:29:26.667354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.789 09:29:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83904 00:15:53.358 [2024-12-12 09:29:27.070169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:54.298 00:15:54.298 real 0m11.673s 00:15:54.298 user 0m18.261s 00:15:54.298 sys 0m2.330s 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.298 ************************************ 00:15:54.298 END TEST raid5f_state_function_test 00:15:54.298 ************************************ 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.298 09:29:28 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:54.298 09:29:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:54.298 09:29:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.298 09:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.298 ************************************ 00:15:54.298 START TEST 
raid5f_state_function_test_sb 00:15:54.298 ************************************ 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:54.298 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:54.558 
09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.558 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84577 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84577' 00:15:54.559 Process raid pid: 84577 00:15:54.559 09:29:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84577 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84577 ']' 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.559 09:29:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.559 [2024-12-12 09:29:28.433754] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:15:54.559 [2024-12-12 09:29:28.433887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.818 [2024-12-12 09:29:28.613391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.818 [2024-12-12 09:29:28.747421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.078 [2024-12-12 09:29:28.974996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.078 [2024-12-12 09:29:28.975030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.338 [2024-12-12 09:29:29.273212] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.338 [2024-12-12 09:29:29.273338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.338 [2024-12-12 09:29:29.273367] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.338 [2024-12-12 09:29:29.273389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.338 [2024-12-12 09:29:29.273406] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:55.338 [2024-12-12 09:29:29.273425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.338 [2024-12-12 09:29:29.273457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:55.338 [2024-12-12 09:29:29.273478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.338 "name": "Existed_Raid", 00:15:55.338 "uuid": "d211edc8-5c25-4359-b9d4-3d2289b6ac30", 00:15:55.338 "strip_size_kb": 64, 00:15:55.338 "state": "configuring", 00:15:55.338 "raid_level": "raid5f", 00:15:55.338 "superblock": true, 00:15:55.338 "num_base_bdevs": 4, 00:15:55.338 "num_base_bdevs_discovered": 0, 00:15:55.338 "num_base_bdevs_operational": 4, 00:15:55.338 "base_bdevs_list": [ 00:15:55.338 { 00:15:55.338 "name": "BaseBdev1", 00:15:55.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.338 "is_configured": false, 00:15:55.338 "data_offset": 0, 00:15:55.338 "data_size": 0 00:15:55.338 }, 00:15:55.338 { 00:15:55.338 "name": "BaseBdev2", 00:15:55.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.338 "is_configured": false, 00:15:55.338 "data_offset": 0, 00:15:55.338 "data_size": 0 00:15:55.338 }, 00:15:55.338 { 00:15:55.338 "name": "BaseBdev3", 00:15:55.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.338 "is_configured": false, 00:15:55.338 "data_offset": 0, 00:15:55.338 "data_size": 0 00:15:55.338 }, 00:15:55.338 { 00:15:55.338 "name": "BaseBdev4", 00:15:55.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.338 "is_configured": false, 00:15:55.338 "data_offset": 0, 00:15:55.338 "data_size": 0 00:15:55.338 } 00:15:55.338 ] 00:15:55.338 }' 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.338 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
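The verify_raid_bdev_state helper traced above fetches the full raid bdev list with `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and then compares individual fields against the expected values. The same extraction can be sketched offline against a captured JSON document (the sample below is abbreviated from the log output; no running SPDK target is assumed):

```shell
# Extract the fields verify_raid_bdev_state compares, from a captured
# bdev_raid_get_bdevs-style JSON document (abbreviated from the log).
raid_json='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid5f",
 "strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":0,
 "num_base_bdevs_operational":4}]'

# Select the raid bdev of interest, as the helper does.
info=$(echo "$raid_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull out the fields that get compared against expected_state and
# num_base_bdevs_operational.
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$info" | jq -r '.num_base_bdevs_operational')

echo "state=$state discovered=$discovered operational=$operational"
```

This mirrors why the freshly created array above reports `"state": "configuring"` with `num_base_bdevs_discovered: 0`: none of the four base bdevs exist yet, so the raid bdev cannot transition to online until each is created and claimed.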
00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.908 [2024-12-12 09:29:29.684417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.908 [2024-12-12 09:29:29.684453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.908 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.908 [2024-12-12 09:29:29.696412] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.909 [2024-12-12 09:29:29.696504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.909 [2024-12-12 09:29:29.696529] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.909 [2024-12-12 09:29:29.696551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.909 [2024-12-12 09:29:29.696568] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.909 [2024-12-12 09:29:29.696589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.909 [2024-12-12 09:29:29.696605] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:55.909 [2024-12-12 09:29:29.696626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 [2024-12-12 09:29:29.748614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.909 BaseBdev1 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 [ 00:15:55.909 { 00:15:55.909 "name": "BaseBdev1", 00:15:55.909 "aliases": [ 00:15:55.909 "a3deae14-8242-42c3-98d7-5fee4ae49931" 00:15:55.909 ], 00:15:55.909 "product_name": "Malloc disk", 00:15:55.909 "block_size": 512, 00:15:55.909 "num_blocks": 65536, 00:15:55.909 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:55.909 "assigned_rate_limits": { 00:15:55.909 "rw_ios_per_sec": 0, 00:15:55.909 "rw_mbytes_per_sec": 0, 00:15:55.909 "r_mbytes_per_sec": 0, 00:15:55.909 "w_mbytes_per_sec": 0 00:15:55.909 }, 00:15:55.909 "claimed": true, 00:15:55.909 "claim_type": "exclusive_write", 00:15:55.909 "zoned": false, 00:15:55.909 "supported_io_types": { 00:15:55.909 "read": true, 00:15:55.909 "write": true, 00:15:55.909 "unmap": true, 00:15:55.909 "flush": true, 00:15:55.909 "reset": true, 00:15:55.909 "nvme_admin": false, 00:15:55.909 "nvme_io": false, 00:15:55.909 "nvme_io_md": false, 00:15:55.909 "write_zeroes": true, 00:15:55.909 "zcopy": true, 00:15:55.909 "get_zone_info": false, 00:15:55.909 "zone_management": false, 00:15:55.909 "zone_append": false, 00:15:55.909 "compare": false, 00:15:55.909 "compare_and_write": false, 00:15:55.909 "abort": true, 00:15:55.909 "seek_hole": false, 00:15:55.909 "seek_data": false, 00:15:55.909 "copy": true, 00:15:55.909 "nvme_iov_md": false 00:15:55.909 }, 00:15:55.909 "memory_domains": [ 00:15:55.909 { 00:15:55.909 "dma_device_id": "system", 00:15:55.909 "dma_device_type": 1 00:15:55.909 }, 00:15:55.909 { 00:15:55.909 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:55.909 "dma_device_type": 2 00:15:55.909 } 00:15:55.909 ], 00:15:55.909 "driver_specific": {} 00:15:55.909 } 00:15:55.909 ] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.909 "name": "Existed_Raid", 00:15:55.909 "uuid": "866e35d2-995d-4470-9284-762b4e4ac783", 00:15:55.909 "strip_size_kb": 64, 00:15:55.909 "state": "configuring", 00:15:55.909 "raid_level": "raid5f", 00:15:55.909 "superblock": true, 00:15:55.909 "num_base_bdevs": 4, 00:15:55.909 "num_base_bdevs_discovered": 1, 00:15:55.909 "num_base_bdevs_operational": 4, 00:15:55.909 "base_bdevs_list": [ 00:15:55.909 { 00:15:55.909 "name": "BaseBdev1", 00:15:55.909 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:55.909 "is_configured": true, 00:15:55.909 "data_offset": 2048, 00:15:55.909 "data_size": 63488 00:15:55.909 }, 00:15:55.909 { 00:15:55.909 "name": "BaseBdev2", 00:15:55.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.909 "is_configured": false, 00:15:55.909 "data_offset": 0, 00:15:55.909 "data_size": 0 00:15:55.909 }, 00:15:55.909 { 00:15:55.909 "name": "BaseBdev3", 00:15:55.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.909 "is_configured": false, 00:15:55.909 "data_offset": 0, 00:15:55.909 "data_size": 0 00:15:55.909 }, 00:15:55.909 { 00:15:55.909 "name": "BaseBdev4", 00:15:55.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.909 "is_configured": false, 00:15:55.909 "data_offset": 0, 00:15:55.909 "data_size": 0 00:15:55.909 } 00:15:55.909 ] 00:15:55.909 }' 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.909 09:29:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:56.169 09:29:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.169 [2024-12-12 09:29:30.163903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.169 [2024-12-12 09:29:30.164025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.169 [2024-12-12 09:29:30.171952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.169 [2024-12-12 09:29:30.173962] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.169 [2024-12-12 09:29:30.174012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.169 [2024-12-12 09:29:30.174023] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:56.169 [2024-12-12 09:29:30.174034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.169 [2024-12-12 09:29:30.174040] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:56.169 [2024-12-12 09:29:30.174048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.169 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.170 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.428 09:29:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.428 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.428 "name": "Existed_Raid", 00:15:56.428 "uuid": "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:56.428 "strip_size_kb": 64, 00:15:56.428 "state": "configuring", 00:15:56.428 "raid_level": "raid5f", 00:15:56.428 "superblock": true, 00:15:56.428 "num_base_bdevs": 4, 00:15:56.428 "num_base_bdevs_discovered": 1, 00:15:56.428 "num_base_bdevs_operational": 4, 00:15:56.428 "base_bdevs_list": [ 00:15:56.428 { 00:15:56.428 "name": "BaseBdev1", 00:15:56.428 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:56.428 "is_configured": true, 00:15:56.428 "data_offset": 2048, 00:15:56.428 "data_size": 63488 00:15:56.428 }, 00:15:56.428 { 00:15:56.428 "name": "BaseBdev2", 00:15:56.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.428 "is_configured": false, 00:15:56.428 "data_offset": 0, 00:15:56.428 "data_size": 0 00:15:56.428 }, 00:15:56.428 { 00:15:56.428 "name": "BaseBdev3", 00:15:56.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.428 "is_configured": false, 00:15:56.428 "data_offset": 0, 00:15:56.428 "data_size": 0 00:15:56.428 }, 00:15:56.428 { 00:15:56.428 "name": "BaseBdev4", 00:15:56.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.428 "is_configured": false, 00:15:56.428 "data_offset": 0, 00:15:56.428 "data_size": 0 00:15:56.428 } 00:15:56.428 ] 00:15:56.429 }' 00:15:56.429 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.429 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.688 [2024-12-12 09:29:30.666629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.688 BaseBdev2 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.688 [ 00:15:56.688 { 00:15:56.688 "name": "BaseBdev2", 00:15:56.688 "aliases": [ 00:15:56.688 
"752f5421-da34-4832-aef9-c6f2178979c7" 00:15:56.688 ], 00:15:56.688 "product_name": "Malloc disk", 00:15:56.688 "block_size": 512, 00:15:56.688 "num_blocks": 65536, 00:15:56.688 "uuid": "752f5421-da34-4832-aef9-c6f2178979c7", 00:15:56.688 "assigned_rate_limits": { 00:15:56.688 "rw_ios_per_sec": 0, 00:15:56.688 "rw_mbytes_per_sec": 0, 00:15:56.688 "r_mbytes_per_sec": 0, 00:15:56.688 "w_mbytes_per_sec": 0 00:15:56.688 }, 00:15:56.688 "claimed": true, 00:15:56.688 "claim_type": "exclusive_write", 00:15:56.688 "zoned": false, 00:15:56.688 "supported_io_types": { 00:15:56.688 "read": true, 00:15:56.688 "write": true, 00:15:56.688 "unmap": true, 00:15:56.688 "flush": true, 00:15:56.688 "reset": true, 00:15:56.688 "nvme_admin": false, 00:15:56.688 "nvme_io": false, 00:15:56.688 "nvme_io_md": false, 00:15:56.688 "write_zeroes": true, 00:15:56.688 "zcopy": true, 00:15:56.688 "get_zone_info": false, 00:15:56.688 "zone_management": false, 00:15:56.688 "zone_append": false, 00:15:56.688 "compare": false, 00:15:56.688 "compare_and_write": false, 00:15:56.688 "abort": true, 00:15:56.688 "seek_hole": false, 00:15:56.688 "seek_data": false, 00:15:56.688 "copy": true, 00:15:56.688 "nvme_iov_md": false 00:15:56.688 }, 00:15:56.688 "memory_domains": [ 00:15:56.688 { 00:15:56.688 "dma_device_id": "system", 00:15:56.688 "dma_device_type": 1 00:15:56.688 }, 00:15:56.688 { 00:15:56.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.688 "dma_device_type": 2 00:15:56.688 } 00:15:56.688 ], 00:15:56.688 "driver_specific": {} 00:15:56.688 } 00:15:56.688 ] 00:15:56.688 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.689 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.949 "name": "Existed_Raid", 00:15:56.949 "uuid": 
"d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:56.949 "strip_size_kb": 64, 00:15:56.949 "state": "configuring", 00:15:56.949 "raid_level": "raid5f", 00:15:56.949 "superblock": true, 00:15:56.949 "num_base_bdevs": 4, 00:15:56.949 "num_base_bdevs_discovered": 2, 00:15:56.949 "num_base_bdevs_operational": 4, 00:15:56.949 "base_bdevs_list": [ 00:15:56.949 { 00:15:56.949 "name": "BaseBdev1", 00:15:56.949 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:56.949 "is_configured": true, 00:15:56.949 "data_offset": 2048, 00:15:56.949 "data_size": 63488 00:15:56.949 }, 00:15:56.949 { 00:15:56.949 "name": "BaseBdev2", 00:15:56.949 "uuid": "752f5421-da34-4832-aef9-c6f2178979c7", 00:15:56.949 "is_configured": true, 00:15:56.949 "data_offset": 2048, 00:15:56.949 "data_size": 63488 00:15:56.949 }, 00:15:56.949 { 00:15:56.949 "name": "BaseBdev3", 00:15:56.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.949 "is_configured": false, 00:15:56.949 "data_offset": 0, 00:15:56.949 "data_size": 0 00:15:56.949 }, 00:15:56.949 { 00:15:56.949 "name": "BaseBdev4", 00:15:56.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.949 "is_configured": false, 00:15:56.949 "data_offset": 0, 00:15:56.949 "data_size": 0 00:15:56.949 } 00:15:56.949 ] 00:15:56.949 }' 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.949 09:29:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.209 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:57.209 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.209 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.469 [2024-12-12 09:29:31.240882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.469 BaseBdev3 
00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.469 [ 00:15:57.469 { 00:15:57.469 "name": "BaseBdev3", 00:15:57.469 "aliases": [ 00:15:57.469 "59aedb2e-02f0-43c4-ac05-e106524c2692" 00:15:57.469 ], 00:15:57.469 "product_name": "Malloc disk", 00:15:57.469 "block_size": 512, 00:15:57.469 "num_blocks": 65536, 00:15:57.469 "uuid": "59aedb2e-02f0-43c4-ac05-e106524c2692", 00:15:57.469 
"assigned_rate_limits": { 00:15:57.469 "rw_ios_per_sec": 0, 00:15:57.469 "rw_mbytes_per_sec": 0, 00:15:57.469 "r_mbytes_per_sec": 0, 00:15:57.469 "w_mbytes_per_sec": 0 00:15:57.469 }, 00:15:57.469 "claimed": true, 00:15:57.469 "claim_type": "exclusive_write", 00:15:57.469 "zoned": false, 00:15:57.469 "supported_io_types": { 00:15:57.469 "read": true, 00:15:57.469 "write": true, 00:15:57.469 "unmap": true, 00:15:57.469 "flush": true, 00:15:57.469 "reset": true, 00:15:57.469 "nvme_admin": false, 00:15:57.469 "nvme_io": false, 00:15:57.469 "nvme_io_md": false, 00:15:57.469 "write_zeroes": true, 00:15:57.469 "zcopy": true, 00:15:57.469 "get_zone_info": false, 00:15:57.469 "zone_management": false, 00:15:57.469 "zone_append": false, 00:15:57.469 "compare": false, 00:15:57.469 "compare_and_write": false, 00:15:57.469 "abort": true, 00:15:57.469 "seek_hole": false, 00:15:57.469 "seek_data": false, 00:15:57.469 "copy": true, 00:15:57.469 "nvme_iov_md": false 00:15:57.469 }, 00:15:57.469 "memory_domains": [ 00:15:57.469 { 00:15:57.469 "dma_device_id": "system", 00:15:57.469 "dma_device_type": 1 00:15:57.469 }, 00:15:57.469 { 00:15:57.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.469 "dma_device_type": 2 00:15:57.469 } 00:15:57.469 ], 00:15:57.469 "driver_specific": {} 00:15:57.469 } 00:15:57.469 ] 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.469 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.469 "name": "Existed_Raid", 00:15:57.469 "uuid": "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:57.469 "strip_size_kb": 64, 00:15:57.469 "state": "configuring", 00:15:57.469 "raid_level": "raid5f", 00:15:57.469 "superblock": true, 00:15:57.469 "num_base_bdevs": 4, 00:15:57.469 "num_base_bdevs_discovered": 3, 
00:15:57.469 "num_base_bdevs_operational": 4, 00:15:57.469 "base_bdevs_list": [ 00:15:57.469 { 00:15:57.469 "name": "BaseBdev1", 00:15:57.469 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:57.469 "is_configured": true, 00:15:57.469 "data_offset": 2048, 00:15:57.469 "data_size": 63488 00:15:57.469 }, 00:15:57.469 { 00:15:57.470 "name": "BaseBdev2", 00:15:57.470 "uuid": "752f5421-da34-4832-aef9-c6f2178979c7", 00:15:57.470 "is_configured": true, 00:15:57.470 "data_offset": 2048, 00:15:57.470 "data_size": 63488 00:15:57.470 }, 00:15:57.470 { 00:15:57.470 "name": "BaseBdev3", 00:15:57.470 "uuid": "59aedb2e-02f0-43c4-ac05-e106524c2692", 00:15:57.470 "is_configured": true, 00:15:57.470 "data_offset": 2048, 00:15:57.470 "data_size": 63488 00:15:57.470 }, 00:15:57.470 { 00:15:57.470 "name": "BaseBdev4", 00:15:57.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.470 "is_configured": false, 00:15:57.470 "data_offset": 0, 00:15:57.470 "data_size": 0 00:15:57.470 } 00:15:57.470 ] 00:15:57.470 }' 00:15:57.470 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.470 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 [2024-12-12 09:29:31.735280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.730 [2024-12-12 09:29:31.735670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:57.730 [2024-12-12 09:29:31.735699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:57.730 [2024-12-12 
09:29:31.736012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:57.730 BaseBdev4 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.730 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 [2024-12-12 09:29:31.743065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:57.730 [2024-12-12 09:29:31.743089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:57.730 [2024-12-12 09:29:31.743332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:57.990 09:29:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.990 [ 00:15:57.990 { 00:15:57.990 "name": "BaseBdev4", 00:15:57.990 "aliases": [ 00:15:57.990 "c046fdc5-83f4-4721-97d7-14970a74df25" 00:15:57.990 ], 00:15:57.990 "product_name": "Malloc disk", 00:15:57.990 "block_size": 512, 00:15:57.990 "num_blocks": 65536, 00:15:57.990 "uuid": "c046fdc5-83f4-4721-97d7-14970a74df25", 00:15:57.990 "assigned_rate_limits": { 00:15:57.990 "rw_ios_per_sec": 0, 00:15:57.990 "rw_mbytes_per_sec": 0, 00:15:57.990 "r_mbytes_per_sec": 0, 00:15:57.990 "w_mbytes_per_sec": 0 00:15:57.990 }, 00:15:57.990 "claimed": true, 00:15:57.990 "claim_type": "exclusive_write", 00:15:57.990 "zoned": false, 00:15:57.990 "supported_io_types": { 00:15:57.990 "read": true, 00:15:57.990 "write": true, 00:15:57.990 "unmap": true, 00:15:57.990 "flush": true, 00:15:57.990 "reset": true, 00:15:57.990 "nvme_admin": false, 00:15:57.990 "nvme_io": false, 00:15:57.990 "nvme_io_md": false, 00:15:57.990 "write_zeroes": true, 00:15:57.990 "zcopy": true, 00:15:57.990 "get_zone_info": false, 00:15:57.990 "zone_management": false, 00:15:57.990 "zone_append": false, 00:15:57.990 "compare": false, 00:15:57.990 "compare_and_write": false, 00:15:57.990 "abort": true, 00:15:57.990 "seek_hole": false, 00:15:57.990 "seek_data": false, 00:15:57.990 "copy": true, 00:15:57.990 "nvme_iov_md": false 00:15:57.990 }, 00:15:57.990 "memory_domains": [ 00:15:57.990 { 00:15:57.990 "dma_device_id": "system", 00:15:57.990 "dma_device_type": 1 00:15:57.990 }, 00:15:57.990 { 00:15:57.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.990 "dma_device_type": 2 00:15:57.990 } 00:15:57.990 ], 00:15:57.990 "driver_specific": {} 00:15:57.990 } 00:15:57.990 ] 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.990 09:29:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.990 "name": "Existed_Raid", 00:15:57.990 "uuid": "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:57.990 "strip_size_kb": 64, 00:15:57.990 "state": "online", 00:15:57.990 "raid_level": "raid5f", 00:15:57.990 "superblock": true, 00:15:57.990 "num_base_bdevs": 4, 00:15:57.990 "num_base_bdevs_discovered": 4, 00:15:57.990 "num_base_bdevs_operational": 4, 00:15:57.990 "base_bdevs_list": [ 00:15:57.990 { 00:15:57.990 "name": "BaseBdev1", 00:15:57.990 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:57.990 "is_configured": true, 00:15:57.990 "data_offset": 2048, 00:15:57.990 "data_size": 63488 00:15:57.990 }, 00:15:57.990 { 00:15:57.990 "name": "BaseBdev2", 00:15:57.990 "uuid": "752f5421-da34-4832-aef9-c6f2178979c7", 00:15:57.990 "is_configured": true, 00:15:57.990 "data_offset": 2048, 00:15:57.990 "data_size": 63488 00:15:57.990 }, 00:15:57.990 { 00:15:57.990 "name": "BaseBdev3", 00:15:57.990 "uuid": "59aedb2e-02f0-43c4-ac05-e106524c2692", 00:15:57.990 "is_configured": true, 00:15:57.990 "data_offset": 2048, 00:15:57.990 "data_size": 63488 00:15:57.990 }, 00:15:57.990 { 00:15:57.990 "name": "BaseBdev4", 00:15:57.990 "uuid": "c046fdc5-83f4-4721-97d7-14970a74df25", 00:15:57.990 "is_configured": true, 00:15:57.990 "data_offset": 2048, 00:15:57.990 "data_size": 63488 00:15:57.990 } 00:15:57.990 ] 00:15:57.990 }' 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.990 09:29:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.251 [2024-12-12 09:29:32.227536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.251 "name": "Existed_Raid", 00:15:58.251 "aliases": [ 00:15:58.251 "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8" 00:15:58.251 ], 00:15:58.251 "product_name": "Raid Volume", 00:15:58.251 "block_size": 512, 00:15:58.251 "num_blocks": 190464, 00:15:58.251 "uuid": "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:58.251 "assigned_rate_limits": { 00:15:58.251 "rw_ios_per_sec": 0, 00:15:58.251 "rw_mbytes_per_sec": 0, 00:15:58.251 "r_mbytes_per_sec": 0, 00:15:58.251 "w_mbytes_per_sec": 0 00:15:58.251 }, 00:15:58.251 "claimed": false, 00:15:58.251 "zoned": false, 00:15:58.251 "supported_io_types": { 00:15:58.251 "read": true, 00:15:58.251 "write": true, 00:15:58.251 "unmap": false, 00:15:58.251 "flush": false, 
00:15:58.251 "reset": true, 00:15:58.251 "nvme_admin": false, 00:15:58.251 "nvme_io": false, 00:15:58.251 "nvme_io_md": false, 00:15:58.251 "write_zeroes": true, 00:15:58.251 "zcopy": false, 00:15:58.251 "get_zone_info": false, 00:15:58.251 "zone_management": false, 00:15:58.251 "zone_append": false, 00:15:58.251 "compare": false, 00:15:58.251 "compare_and_write": false, 00:15:58.251 "abort": false, 00:15:58.251 "seek_hole": false, 00:15:58.251 "seek_data": false, 00:15:58.251 "copy": false, 00:15:58.251 "nvme_iov_md": false 00:15:58.251 }, 00:15:58.251 "driver_specific": { 00:15:58.251 "raid": { 00:15:58.251 "uuid": "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:58.251 "strip_size_kb": 64, 00:15:58.251 "state": "online", 00:15:58.251 "raid_level": "raid5f", 00:15:58.251 "superblock": true, 00:15:58.251 "num_base_bdevs": 4, 00:15:58.251 "num_base_bdevs_discovered": 4, 00:15:58.251 "num_base_bdevs_operational": 4, 00:15:58.251 "base_bdevs_list": [ 00:15:58.251 { 00:15:58.251 "name": "BaseBdev1", 00:15:58.251 "uuid": "a3deae14-8242-42c3-98d7-5fee4ae49931", 00:15:58.251 "is_configured": true, 00:15:58.251 "data_offset": 2048, 00:15:58.251 "data_size": 63488 00:15:58.251 }, 00:15:58.251 { 00:15:58.251 "name": "BaseBdev2", 00:15:58.251 "uuid": "752f5421-da34-4832-aef9-c6f2178979c7", 00:15:58.251 "is_configured": true, 00:15:58.251 "data_offset": 2048, 00:15:58.251 "data_size": 63488 00:15:58.251 }, 00:15:58.251 { 00:15:58.251 "name": "BaseBdev3", 00:15:58.251 "uuid": "59aedb2e-02f0-43c4-ac05-e106524c2692", 00:15:58.251 "is_configured": true, 00:15:58.251 "data_offset": 2048, 00:15:58.251 "data_size": 63488 00:15:58.251 }, 00:15:58.251 { 00:15:58.251 "name": "BaseBdev4", 00:15:58.251 "uuid": "c046fdc5-83f4-4721-97d7-14970a74df25", 00:15:58.251 "is_configured": true, 00:15:58.251 "data_offset": 2048, 00:15:58.251 "data_size": 63488 00:15:58.251 } 00:15:58.251 ] 00:15:58.251 } 00:15:58.251 } 00:15:58.251 }' 00:15:58.251 09:29:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:58.511 BaseBdev2 00:15:58.511 BaseBdev3 00:15:58.511 BaseBdev4' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.511 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.512 09:29:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.512 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 [2024-12-12 09:29:32.566816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.771 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.771 "name": "Existed_Raid", 00:15:58.771 "uuid": "d7ef98c9-6015-4ea1-ab00-3dd1b2cba7c8", 00:15:58.771 "strip_size_kb": 64, 00:15:58.771 "state": "online", 00:15:58.771 "raid_level": "raid5f", 00:15:58.771 "superblock": true, 00:15:58.771 "num_base_bdevs": 4, 00:15:58.771 "num_base_bdevs_discovered": 3, 00:15:58.771 "num_base_bdevs_operational": 3, 00:15:58.771 "base_bdevs_list": [ 00:15:58.771 { 00:15:58.771 "name": 
null, 00:15:58.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.772 "is_configured": false, 00:15:58.772 "data_offset": 0, 00:15:58.772 "data_size": 63488 00:15:58.772 }, 00:15:58.772 { 00:15:58.772 "name": "BaseBdev2", 00:15:58.772 "uuid": "752f5421-da34-4832-aef9-c6f2178979c7", 00:15:58.772 "is_configured": true, 00:15:58.772 "data_offset": 2048, 00:15:58.772 "data_size": 63488 00:15:58.772 }, 00:15:58.772 { 00:15:58.772 "name": "BaseBdev3", 00:15:58.772 "uuid": "59aedb2e-02f0-43c4-ac05-e106524c2692", 00:15:58.772 "is_configured": true, 00:15:58.772 "data_offset": 2048, 00:15:58.772 "data_size": 63488 00:15:58.772 }, 00:15:58.772 { 00:15:58.772 "name": "BaseBdev4", 00:15:58.772 "uuid": "c046fdc5-83f4-4721-97d7-14970a74df25", 00:15:58.772 "is_configured": true, 00:15:58.772 "data_offset": 2048, 00:15:58.772 "data_size": 63488 00:15:58.772 } 00:15:58.772 ] 00:15:58.772 }' 00:15:58.772 09:29:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.772 09:29:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 [2024-12-12 09:29:33.182644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.341 [2024-12-12 09:29:33.182887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.341 [2024-12-12 09:29:33.279688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 [2024-12-12 09:29:33.335597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.601 [2024-12-12 
09:29:33.489158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:59.601 [2024-12-12 09:29:33.489286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.601 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.861 09:29:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.861 BaseBdev2 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.861 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.861 [ 00:15:59.861 { 00:15:59.861 "name": "BaseBdev2", 00:15:59.861 "aliases": [ 00:15:59.861 "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3" 00:15:59.861 ], 00:15:59.861 "product_name": "Malloc disk", 00:15:59.861 "block_size": 512, 00:15:59.861 
"num_blocks": 65536, 00:15:59.861 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:15:59.861 "assigned_rate_limits": { 00:15:59.861 "rw_ios_per_sec": 0, 00:15:59.861 "rw_mbytes_per_sec": 0, 00:15:59.861 "r_mbytes_per_sec": 0, 00:15:59.862 "w_mbytes_per_sec": 0 00:15:59.862 }, 00:15:59.862 "claimed": false, 00:15:59.862 "zoned": false, 00:15:59.862 "supported_io_types": { 00:15:59.862 "read": true, 00:15:59.862 "write": true, 00:15:59.862 "unmap": true, 00:15:59.862 "flush": true, 00:15:59.862 "reset": true, 00:15:59.862 "nvme_admin": false, 00:15:59.862 "nvme_io": false, 00:15:59.862 "nvme_io_md": false, 00:15:59.862 "write_zeroes": true, 00:15:59.862 "zcopy": true, 00:15:59.862 "get_zone_info": false, 00:15:59.862 "zone_management": false, 00:15:59.862 "zone_append": false, 00:15:59.862 "compare": false, 00:15:59.862 "compare_and_write": false, 00:15:59.862 "abort": true, 00:15:59.862 "seek_hole": false, 00:15:59.862 "seek_data": false, 00:15:59.862 "copy": true, 00:15:59.862 "nvme_iov_md": false 00:15:59.862 }, 00:15:59.862 "memory_domains": [ 00:15:59.862 { 00:15:59.862 "dma_device_id": "system", 00:15:59.862 "dma_device_type": 1 00:15:59.862 }, 00:15:59.862 { 00:15:59.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.862 "dma_device_type": 2 00:15:59.862 } 00:15:59.862 ], 00:15:59.862 "driver_specific": {} 00:15:59.862 } 00:15:59.862 ] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:59.862 09:29:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.862 BaseBdev3 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.862 [ 00:15:59.862 { 00:15:59.862 "name": "BaseBdev3", 00:15:59.862 "aliases": [ 00:15:59.862 
"ca219eca-db50-40c2-8866-062ba26ac081" 00:15:59.862 ], 00:15:59.862 "product_name": "Malloc disk", 00:15:59.862 "block_size": 512, 00:15:59.862 "num_blocks": 65536, 00:15:59.862 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:15:59.862 "assigned_rate_limits": { 00:15:59.862 "rw_ios_per_sec": 0, 00:15:59.862 "rw_mbytes_per_sec": 0, 00:15:59.862 "r_mbytes_per_sec": 0, 00:15:59.862 "w_mbytes_per_sec": 0 00:15:59.862 }, 00:15:59.862 "claimed": false, 00:15:59.862 "zoned": false, 00:15:59.862 "supported_io_types": { 00:15:59.862 "read": true, 00:15:59.862 "write": true, 00:15:59.862 "unmap": true, 00:15:59.862 "flush": true, 00:15:59.862 "reset": true, 00:15:59.862 "nvme_admin": false, 00:15:59.862 "nvme_io": false, 00:15:59.862 "nvme_io_md": false, 00:15:59.862 "write_zeroes": true, 00:15:59.862 "zcopy": true, 00:15:59.862 "get_zone_info": false, 00:15:59.862 "zone_management": false, 00:15:59.862 "zone_append": false, 00:15:59.862 "compare": false, 00:15:59.862 "compare_and_write": false, 00:15:59.862 "abort": true, 00:15:59.862 "seek_hole": false, 00:15:59.862 "seek_data": false, 00:15:59.862 "copy": true, 00:15:59.862 "nvme_iov_md": false 00:15:59.862 }, 00:15:59.862 "memory_domains": [ 00:15:59.862 { 00:15:59.862 "dma_device_id": "system", 00:15:59.862 "dma_device_type": 1 00:15:59.862 }, 00:15:59.862 { 00:15:59.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.862 "dma_device_type": 2 00:15:59.862 } 00:15:59.862 ], 00:15:59.862 "driver_specific": {} 00:15:59.862 } 00:15:59.862 ] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.862 09:29:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.862 BaseBdev4 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:59.862 [ 00:15:59.862 { 00:15:59.862 "name": "BaseBdev4", 00:15:59.862 "aliases": [ 00:15:59.862 "2cbacc4e-4afe-4733-a708-b01c9009c552" 00:15:59.862 ], 00:15:59.862 "product_name": "Malloc disk", 00:15:59.862 "block_size": 512, 00:15:59.862 "num_blocks": 65536, 00:15:59.862 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:15:59.862 "assigned_rate_limits": { 00:15:59.862 "rw_ios_per_sec": 0, 00:15:59.862 "rw_mbytes_per_sec": 0, 00:15:59.862 "r_mbytes_per_sec": 0, 00:15:59.862 "w_mbytes_per_sec": 0 00:15:59.862 }, 00:15:59.862 "claimed": false, 00:15:59.862 "zoned": false, 00:15:59.862 "supported_io_types": { 00:15:59.862 "read": true, 00:15:59.862 "write": true, 00:15:59.862 "unmap": true, 00:15:59.862 "flush": true, 00:15:59.862 "reset": true, 00:15:59.862 "nvme_admin": false, 00:15:59.862 "nvme_io": false, 00:15:59.862 "nvme_io_md": false, 00:15:59.862 "write_zeroes": true, 00:15:59.862 "zcopy": true, 00:15:59.862 "get_zone_info": false, 00:15:59.862 "zone_management": false, 00:15:59.862 "zone_append": false, 00:15:59.862 "compare": false, 00:15:59.862 "compare_and_write": false, 00:15:59.862 "abort": true, 00:15:59.862 "seek_hole": false, 00:15:59.862 "seek_data": false, 00:15:59.862 "copy": true, 00:15:59.862 "nvme_iov_md": false 00:15:59.862 }, 00:15:59.862 "memory_domains": [ 00:15:59.862 { 00:15:59.862 "dma_device_id": "system", 00:15:59.862 "dma_device_type": 1 00:15:59.862 }, 00:15:59.862 { 00:15:59.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.122 "dma_device_type": 2 00:16:00.122 } 00:16:00.122 ], 00:16:00.122 "driver_specific": {} 00:16:00.122 } 00:16:00.122 ] 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.122 09:29:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.122 [2024-12-12 09:29:33.891983] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.122 [2024-12-12 09:29:33.892117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.122 [2024-12-12 09:29:33.892159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.122 [2024-12-12 09:29:33.894303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.122 [2024-12-12 09:29:33.894397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.122 "name": "Existed_Raid", 00:16:00.122 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:00.122 "strip_size_kb": 64, 00:16:00.122 "state": "configuring", 00:16:00.122 "raid_level": "raid5f", 00:16:00.122 "superblock": true, 00:16:00.122 "num_base_bdevs": 4, 00:16:00.122 "num_base_bdevs_discovered": 3, 00:16:00.122 "num_base_bdevs_operational": 4, 00:16:00.122 "base_bdevs_list": [ 00:16:00.122 { 00:16:00.122 "name": "BaseBdev1", 00:16:00.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.122 "is_configured": false, 00:16:00.122 "data_offset": 0, 00:16:00.122 "data_size": 0 00:16:00.122 }, 00:16:00.122 { 00:16:00.122 "name": "BaseBdev2", 00:16:00.122 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:00.122 "is_configured": true, 00:16:00.122 "data_offset": 2048, 00:16:00.122 
"data_size": 63488 00:16:00.122 }, 00:16:00.122 { 00:16:00.122 "name": "BaseBdev3", 00:16:00.122 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:00.122 "is_configured": true, 00:16:00.122 "data_offset": 2048, 00:16:00.122 "data_size": 63488 00:16:00.122 }, 00:16:00.122 { 00:16:00.122 "name": "BaseBdev4", 00:16:00.122 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:00.122 "is_configured": true, 00:16:00.122 "data_offset": 2048, 00:16:00.122 "data_size": 63488 00:16:00.122 } 00:16:00.122 ] 00:16:00.122 }' 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.122 09:29:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 [2024-12-12 09:29:34.347511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.382 09:29:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.642 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.642 "name": "Existed_Raid", 00:16:00.642 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:00.642 "strip_size_kb": 64, 00:16:00.642 "state": "configuring", 00:16:00.642 "raid_level": "raid5f", 00:16:00.642 "superblock": true, 00:16:00.642 "num_base_bdevs": 4, 00:16:00.642 "num_base_bdevs_discovered": 2, 00:16:00.642 "num_base_bdevs_operational": 4, 00:16:00.642 "base_bdevs_list": [ 00:16:00.642 { 00:16:00.642 "name": "BaseBdev1", 00:16:00.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.642 "is_configured": false, 00:16:00.642 "data_offset": 0, 00:16:00.642 "data_size": 0 00:16:00.642 }, 00:16:00.642 { 00:16:00.642 "name": null, 00:16:00.642 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:00.642 
"is_configured": false, 00:16:00.642 "data_offset": 0, 00:16:00.642 "data_size": 63488 00:16:00.642 }, 00:16:00.642 { 00:16:00.642 "name": "BaseBdev3", 00:16:00.642 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:00.642 "is_configured": true, 00:16:00.642 "data_offset": 2048, 00:16:00.642 "data_size": 63488 00:16:00.642 }, 00:16:00.642 { 00:16:00.642 "name": "BaseBdev4", 00:16:00.642 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:00.642 "is_configured": true, 00:16:00.642 "data_offset": 2048, 00:16:00.642 "data_size": 63488 00:16:00.642 } 00:16:00.642 ] 00:16:00.642 }' 00:16:00.642 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.642 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.902 [2024-12-12 09:29:34.870232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
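The trace up to this point exercises the raid5f "state function" flow: create malloc base bdevs, build a superblock raid5f array, remove a base bdev, and confirm the array stays in the `configuring` state before re-creating the missing bdev. A minimal sketch of that RPC sequence follows; `rpc_cmd` is stubbed to echo for illustration (in the real test it wraps SPDK's `scripts/rpc.py` against a running target), and the command names and arguments are taken directly from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the raid5f state-function RPC sequence seen in the trace.
# rpc_cmd is stubbed to echo; the real test dispatches to scripts/rpc.py.
rpc_cmd() { echo "rpc_cmd $*"; }

# Create four 32 MiB malloc base bdevs with 512-byte blocks.
for i in 1 2 3 4; do
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Build a superblock (-s) raid5f array with a 64 KiB strip size.
rpc_cmd bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Remove one base bdev; the array should remain "configuring",
# with num_base_bdevs_discovered dropping by one.
rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
rpc_cmd bdev_raid_get_bdevs all

# Re-create the bdev and re-check; the discovered count recovers.
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
rpc_cmd bdev_raid_get_bdevs all
```

Because the array was created with superblock metadata, a re-created base bdev with matching metadata is picked up again on examine, which is what `bdev_wait_for_examine` in the trace waits for.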
00:16:00.902 BaseBdev1 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.902 [ 00:16:00.902 { 00:16:00.902 "name": "BaseBdev1", 00:16:00.902 "aliases": [ 00:16:00.902 "1c0dda76-6f48-415c-9255-09a430decaf4" 00:16:00.902 ], 00:16:00.902 "product_name": "Malloc disk", 00:16:00.902 "block_size": 512, 00:16:00.902 "num_blocks": 65536, 00:16:00.902 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 
00:16:00.902 "assigned_rate_limits": { 00:16:00.902 "rw_ios_per_sec": 0, 00:16:00.902 "rw_mbytes_per_sec": 0, 00:16:00.902 "r_mbytes_per_sec": 0, 00:16:00.902 "w_mbytes_per_sec": 0 00:16:00.902 }, 00:16:00.902 "claimed": true, 00:16:00.902 "claim_type": "exclusive_write", 00:16:00.902 "zoned": false, 00:16:00.902 "supported_io_types": { 00:16:00.902 "read": true, 00:16:00.902 "write": true, 00:16:00.902 "unmap": true, 00:16:00.902 "flush": true, 00:16:00.902 "reset": true, 00:16:00.902 "nvme_admin": false, 00:16:00.902 "nvme_io": false, 00:16:00.902 "nvme_io_md": false, 00:16:00.902 "write_zeroes": true, 00:16:00.902 "zcopy": true, 00:16:00.902 "get_zone_info": false, 00:16:00.902 "zone_management": false, 00:16:00.902 "zone_append": false, 00:16:00.902 "compare": false, 00:16:00.902 "compare_and_write": false, 00:16:00.902 "abort": true, 00:16:00.902 "seek_hole": false, 00:16:00.902 "seek_data": false, 00:16:00.902 "copy": true, 00:16:00.902 "nvme_iov_md": false 00:16:00.902 }, 00:16:00.902 "memory_domains": [ 00:16:00.902 { 00:16:00.902 "dma_device_id": "system", 00:16:00.902 "dma_device_type": 1 00:16:00.902 }, 00:16:00.902 { 00:16:00.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.902 "dma_device_type": 2 00:16:00.902 } 00:16:00.902 ], 00:16:00.902 "driver_specific": {} 00:16:00.902 } 00:16:00.902 ] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.902 09:29:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.902 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.903 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.903 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.903 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.225 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.225 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.225 "name": "Existed_Raid", 00:16:01.225 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:01.226 "strip_size_kb": 64, 00:16:01.226 "state": "configuring", 00:16:01.226 "raid_level": "raid5f", 00:16:01.226 "superblock": true, 00:16:01.226 "num_base_bdevs": 4, 00:16:01.226 "num_base_bdevs_discovered": 3, 00:16:01.226 "num_base_bdevs_operational": 4, 00:16:01.226 "base_bdevs_list": [ 00:16:01.226 { 00:16:01.226 "name": "BaseBdev1", 00:16:01.226 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 
00:16:01.226 "is_configured": true, 00:16:01.226 "data_offset": 2048, 00:16:01.226 "data_size": 63488 00:16:01.226 }, 00:16:01.226 { 00:16:01.226 "name": null, 00:16:01.226 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:01.226 "is_configured": false, 00:16:01.226 "data_offset": 0, 00:16:01.226 "data_size": 63488 00:16:01.226 }, 00:16:01.226 { 00:16:01.226 "name": "BaseBdev3", 00:16:01.226 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:01.226 "is_configured": true, 00:16:01.226 "data_offset": 2048, 00:16:01.226 "data_size": 63488 00:16:01.226 }, 00:16:01.226 { 00:16:01.226 "name": "BaseBdev4", 00:16:01.226 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:01.226 "is_configured": true, 00:16:01.226 "data_offset": 2048, 00:16:01.226 "data_size": 63488 00:16:01.226 } 00:16:01.226 ] 00:16:01.226 }' 00:16:01.226 09:29:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.226 09:29:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
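Each `verify_raid_bdev_state` call in the trace pulls the array's JSON out of `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks fields like `state` and `num_base_bdevs_discovered`. A trimmed sketch of that check, using a hard-coded JSON snippet condensed from the trace in place of a live RPC response:

```shell
#!/usr/bin/env bash
# Sketch of the jq-based state check used by verify_raid_bdev_state.
# raid_json stands in for the output of: rpc.py bdev_raid_get_bdevs all
raid_json='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid5f",
"num_base_bdevs":4,"num_base_bdevs_discovered":2,"num_base_bdevs_operational":4}]'

# Select the array entry by name, as the test script does.
info=$(echo "$raid_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Extract the fields the test asserts on.
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$info" | jq -r '.num_base_bdevs_operational')

echo "Existed_Raid: state=$state discovered=$discovered/$operational"
```

With fewer base bdevs discovered than operational, the array cannot come online, so the expected state throughout this test is `configuring` rather than `online`.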
00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.484 [2024-12-12 09:29:35.429365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.484 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.484 "name": "Existed_Raid", 00:16:01.484 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:01.484 "strip_size_kb": 64, 00:16:01.484 "state": "configuring", 00:16:01.484 "raid_level": "raid5f", 00:16:01.484 "superblock": true, 00:16:01.484 "num_base_bdevs": 4, 00:16:01.484 "num_base_bdevs_discovered": 2, 00:16:01.484 "num_base_bdevs_operational": 4, 00:16:01.484 "base_bdevs_list": [ 00:16:01.484 { 00:16:01.484 "name": "BaseBdev1", 00:16:01.484 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:01.484 "is_configured": true, 00:16:01.484 "data_offset": 2048, 00:16:01.484 "data_size": 63488 00:16:01.484 }, 00:16:01.484 { 00:16:01.484 "name": null, 00:16:01.484 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:01.484 "is_configured": false, 00:16:01.484 "data_offset": 0, 00:16:01.484 "data_size": 63488 00:16:01.484 }, 00:16:01.484 { 00:16:01.484 "name": null, 00:16:01.485 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:01.485 "is_configured": false, 00:16:01.485 "data_offset": 0, 00:16:01.485 "data_size": 63488 00:16:01.485 }, 00:16:01.485 { 00:16:01.485 "name": "BaseBdev4", 00:16:01.485 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:01.485 "is_configured": true, 00:16:01.485 "data_offset": 2048, 00:16:01.485 "data_size": 63488 00:16:01.485 } 00:16:01.485 ] 00:16:01.485 }' 00:16:01.485 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.485 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.054 [2024-12-12 09:29:35.908526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.054 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.055 "name": "Existed_Raid", 00:16:02.055 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:02.055 "strip_size_kb": 64, 00:16:02.055 "state": "configuring", 00:16:02.055 "raid_level": "raid5f", 00:16:02.055 "superblock": true, 00:16:02.055 "num_base_bdevs": 4, 00:16:02.055 "num_base_bdevs_discovered": 3, 00:16:02.055 "num_base_bdevs_operational": 4, 00:16:02.055 "base_bdevs_list": [ 00:16:02.055 { 00:16:02.055 "name": "BaseBdev1", 00:16:02.055 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:02.055 "is_configured": true, 00:16:02.055 "data_offset": 2048, 00:16:02.055 "data_size": 63488 00:16:02.055 }, 00:16:02.055 { 00:16:02.055 "name": null, 00:16:02.055 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:02.055 "is_configured": false, 00:16:02.055 "data_offset": 0, 00:16:02.055 "data_size": 63488 00:16:02.055 }, 00:16:02.055 { 00:16:02.055 "name": "BaseBdev3", 00:16:02.055 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 
00:16:02.055 "is_configured": true, 00:16:02.055 "data_offset": 2048, 00:16:02.055 "data_size": 63488 00:16:02.055 }, 00:16:02.055 { 00:16:02.055 "name": "BaseBdev4", 00:16:02.055 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:02.055 "is_configured": true, 00:16:02.055 "data_offset": 2048, 00:16:02.055 "data_size": 63488 00:16:02.055 } 00:16:02.055 ] 00:16:02.055 }' 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.055 09:29:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.315 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 [2024-12-12 09:29:36.367817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.575 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.575 "name": "Existed_Raid", 00:16:02.575 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:02.575 "strip_size_kb": 64, 00:16:02.575 "state": "configuring", 00:16:02.575 "raid_level": "raid5f", 
00:16:02.575 "superblock": true, 00:16:02.575 "num_base_bdevs": 4, 00:16:02.575 "num_base_bdevs_discovered": 2, 00:16:02.575 "num_base_bdevs_operational": 4, 00:16:02.575 "base_bdevs_list": [ 00:16:02.575 { 00:16:02.575 "name": null, 00:16:02.575 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:02.575 "is_configured": false, 00:16:02.575 "data_offset": 0, 00:16:02.575 "data_size": 63488 00:16:02.575 }, 00:16:02.575 { 00:16:02.575 "name": null, 00:16:02.575 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:02.575 "is_configured": false, 00:16:02.575 "data_offset": 0, 00:16:02.575 "data_size": 63488 00:16:02.575 }, 00:16:02.575 { 00:16:02.575 "name": "BaseBdev3", 00:16:02.575 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:02.575 "is_configured": true, 00:16:02.575 "data_offset": 2048, 00:16:02.575 "data_size": 63488 00:16:02.575 }, 00:16:02.575 { 00:16:02.575 "name": "BaseBdev4", 00:16:02.576 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:02.576 "is_configured": true, 00:16:02.576 "data_offset": 2048, 00:16:02.576 "data_size": 63488 00:16:02.576 } 00:16:02.576 ] 00:16:02.576 }' 00:16:02.576 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.576 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.145 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:03.145 09:29:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.145 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.145 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.145 09:29:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.145 [2024-12-12 09:29:37.024978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.145 "name": "Existed_Raid", 00:16:03.145 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:03.145 "strip_size_kb": 64, 00:16:03.145 "state": "configuring", 00:16:03.145 "raid_level": "raid5f", 00:16:03.145 "superblock": true, 00:16:03.145 "num_base_bdevs": 4, 00:16:03.145 "num_base_bdevs_discovered": 3, 00:16:03.145 "num_base_bdevs_operational": 4, 00:16:03.145 "base_bdevs_list": [ 00:16:03.145 { 00:16:03.145 "name": null, 00:16:03.145 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:03.145 "is_configured": false, 00:16:03.145 "data_offset": 0, 00:16:03.145 "data_size": 63488 00:16:03.145 }, 00:16:03.145 { 00:16:03.145 "name": "BaseBdev2", 00:16:03.145 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:03.145 "is_configured": true, 00:16:03.145 "data_offset": 2048, 00:16:03.145 "data_size": 63488 00:16:03.145 }, 00:16:03.145 { 00:16:03.145 "name": "BaseBdev3", 00:16:03.145 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:03.145 "is_configured": true, 00:16:03.145 "data_offset": 2048, 00:16:03.145 "data_size": 63488 00:16:03.145 }, 00:16:03.145 { 00:16:03.145 "name": "BaseBdev4", 00:16:03.145 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:03.145 "is_configured": true, 00:16:03.145 "data_offset": 2048, 00:16:03.145 "data_size": 63488 00:16:03.145 } 00:16:03.145 ] 00:16:03.145 }' 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:03.145 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.405 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.405 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.405 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.405 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1c0dda76-6f48-415c-9255-09a430decaf4 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 [2024-12-12 09:29:37.548806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:03.665 [2024-12-12 09:29:37.549061] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:03.665 [2024-12-12 09:29:37.549077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:03.665 NewBaseBdev 00:16:03.665 [2024-12-12 09:29:37.549350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 [2024-12-12 09:29:37.556284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:03.665 [2024-12-12 09:29:37.556358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:03.665 [2024-12-12 09:29:37.556681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 [ 00:16:03.665 { 00:16:03.665 "name": "NewBaseBdev", 00:16:03.665 "aliases": [ 00:16:03.665 "1c0dda76-6f48-415c-9255-09a430decaf4" 00:16:03.665 ], 00:16:03.665 "product_name": "Malloc disk", 00:16:03.665 "block_size": 512, 00:16:03.665 "num_blocks": 65536, 00:16:03.665 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:03.665 "assigned_rate_limits": { 00:16:03.665 "rw_ios_per_sec": 0, 00:16:03.665 "rw_mbytes_per_sec": 0, 00:16:03.665 "r_mbytes_per_sec": 0, 00:16:03.665 "w_mbytes_per_sec": 0 00:16:03.665 }, 00:16:03.665 "claimed": true, 00:16:03.665 "claim_type": "exclusive_write", 00:16:03.665 "zoned": false, 00:16:03.665 "supported_io_types": { 00:16:03.665 "read": true, 00:16:03.665 "write": true, 00:16:03.665 "unmap": true, 00:16:03.665 "flush": true, 00:16:03.665 "reset": true, 00:16:03.665 "nvme_admin": false, 00:16:03.665 "nvme_io": false, 00:16:03.665 "nvme_io_md": false, 00:16:03.665 "write_zeroes": true, 00:16:03.665 "zcopy": true, 00:16:03.665 "get_zone_info": false, 00:16:03.665 "zone_management": false, 00:16:03.665 "zone_append": false, 00:16:03.665 "compare": false, 00:16:03.665 "compare_and_write": false, 00:16:03.665 "abort": true, 00:16:03.665 "seek_hole": false, 00:16:03.665 "seek_data": false, 00:16:03.665 "copy": true, 00:16:03.665 "nvme_iov_md": false 00:16:03.665 }, 00:16:03.665 "memory_domains": [ 00:16:03.665 { 00:16:03.665 "dma_device_id": "system", 00:16:03.665 "dma_device_type": 1 00:16:03.665 }, 00:16:03.665 { 00:16:03.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.665 "dma_device_type": 2 00:16:03.665 } 
00:16:03.665 ], 00:16:03.665 "driver_specific": {} 00:16:03.665 } 00:16:03.665 ] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 
09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.665 "name": "Existed_Raid", 00:16:03.665 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:03.665 "strip_size_kb": 64, 00:16:03.665 "state": "online", 00:16:03.665 "raid_level": "raid5f", 00:16:03.665 "superblock": true, 00:16:03.665 "num_base_bdevs": 4, 00:16:03.665 "num_base_bdevs_discovered": 4, 00:16:03.665 "num_base_bdevs_operational": 4, 00:16:03.665 "base_bdevs_list": [ 00:16:03.665 { 00:16:03.665 "name": "NewBaseBdev", 00:16:03.665 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:03.665 "is_configured": true, 00:16:03.665 "data_offset": 2048, 00:16:03.665 "data_size": 63488 00:16:03.665 }, 00:16:03.665 { 00:16:03.665 "name": "BaseBdev2", 00:16:03.665 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:03.665 "is_configured": true, 00:16:03.665 "data_offset": 2048, 00:16:03.665 "data_size": 63488 00:16:03.665 }, 00:16:03.665 { 00:16:03.665 "name": "BaseBdev3", 00:16:03.665 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:03.665 "is_configured": true, 00:16:03.665 "data_offset": 2048, 00:16:03.665 "data_size": 63488 00:16:03.665 }, 00:16:03.665 { 00:16:03.665 "name": "BaseBdev4", 00:16:03.665 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:03.665 "is_configured": true, 00:16:03.665 "data_offset": 2048, 00:16:03.665 "data_size": 63488 00:16:03.665 } 00:16:03.665 ] 00:16:03.665 }' 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.665 09:29:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.236 09:29:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.236 [2024-12-12 09:29:38.005231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.236 "name": "Existed_Raid", 00:16:04.236 "aliases": [ 00:16:04.236 "4e65185e-59f5-4100-9a51-2c7fccfe6cf6" 00:16:04.236 ], 00:16:04.236 "product_name": "Raid Volume", 00:16:04.236 "block_size": 512, 00:16:04.236 "num_blocks": 190464, 00:16:04.236 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:04.236 "assigned_rate_limits": { 00:16:04.236 "rw_ios_per_sec": 0, 00:16:04.236 "rw_mbytes_per_sec": 0, 00:16:04.236 "r_mbytes_per_sec": 0, 00:16:04.236 "w_mbytes_per_sec": 0 00:16:04.236 }, 00:16:04.236 "claimed": false, 00:16:04.236 "zoned": false, 00:16:04.236 "supported_io_types": { 00:16:04.236 "read": true, 00:16:04.236 "write": true, 00:16:04.236 "unmap": false, 00:16:04.236 "flush": false, 
00:16:04.236 "reset": true, 00:16:04.236 "nvme_admin": false, 00:16:04.236 "nvme_io": false, 00:16:04.236 "nvme_io_md": false, 00:16:04.236 "write_zeroes": true, 00:16:04.236 "zcopy": false, 00:16:04.236 "get_zone_info": false, 00:16:04.236 "zone_management": false, 00:16:04.236 "zone_append": false, 00:16:04.236 "compare": false, 00:16:04.236 "compare_and_write": false, 00:16:04.236 "abort": false, 00:16:04.236 "seek_hole": false, 00:16:04.236 "seek_data": false, 00:16:04.236 "copy": false, 00:16:04.236 "nvme_iov_md": false 00:16:04.236 }, 00:16:04.236 "driver_specific": { 00:16:04.236 "raid": { 00:16:04.236 "uuid": "4e65185e-59f5-4100-9a51-2c7fccfe6cf6", 00:16:04.236 "strip_size_kb": 64, 00:16:04.236 "state": "online", 00:16:04.236 "raid_level": "raid5f", 00:16:04.236 "superblock": true, 00:16:04.236 "num_base_bdevs": 4, 00:16:04.236 "num_base_bdevs_discovered": 4, 00:16:04.236 "num_base_bdevs_operational": 4, 00:16:04.236 "base_bdevs_list": [ 00:16:04.236 { 00:16:04.236 "name": "NewBaseBdev", 00:16:04.236 "uuid": "1c0dda76-6f48-415c-9255-09a430decaf4", 00:16:04.236 "is_configured": true, 00:16:04.236 "data_offset": 2048, 00:16:04.236 "data_size": 63488 00:16:04.236 }, 00:16:04.236 { 00:16:04.236 "name": "BaseBdev2", 00:16:04.236 "uuid": "1b4172b7-960b-41cc-a1d6-7c72f02ed6a3", 00:16:04.236 "is_configured": true, 00:16:04.236 "data_offset": 2048, 00:16:04.236 "data_size": 63488 00:16:04.236 }, 00:16:04.236 { 00:16:04.236 "name": "BaseBdev3", 00:16:04.236 "uuid": "ca219eca-db50-40c2-8866-062ba26ac081", 00:16:04.236 "is_configured": true, 00:16:04.236 "data_offset": 2048, 00:16:04.236 "data_size": 63488 00:16:04.236 }, 00:16:04.236 { 00:16:04.236 "name": "BaseBdev4", 00:16:04.236 "uuid": "2cbacc4e-4afe-4733-a708-b01c9009c552", 00:16:04.236 "is_configured": true, 00:16:04.236 "data_offset": 2048, 00:16:04.236 "data_size": 63488 00:16:04.236 } 00:16:04.236 ] 00:16:04.236 } 00:16:04.236 } 00:16:04.236 }' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:04.236 BaseBdev2 00:16:04.236 BaseBdev3 00:16:04.236 BaseBdev4' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.236 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.237 
09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.237 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.237 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:04.497 09:29:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.497 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.498 [2024-12-12 09:29:38.372378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.498 [2024-12-12 09:29:38.372451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.498 [2024-12-12 09:29:38.372521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.498 [2024-12-12 09:29:38.372844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.498 [2024-12-12 09:29:38.372854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84577 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84577 ']' 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 84577 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84577 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84577' 00:16:04.498 killing process with pid 84577 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84577 00:16:04.498 [2024-12-12 09:29:38.423147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.498 09:29:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84577 00:16:05.068 [2024-12-12 09:29:38.836540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.006 09:29:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:06.006 ************************************ 00:16:06.006 END TEST raid5f_state_function_test_sb 00:16:06.006 ************************************ 00:16:06.006 00:16:06.006 real 0m11.708s 00:16:06.006 user 0m18.275s 00:16:06.006 sys 0m2.395s 00:16:06.006 09:29:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.006 09:29:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.265 09:29:40 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:06.265 09:29:40 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:06.265 09:29:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.265 09:29:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.265 ************************************ 00:16:06.265 START TEST raid5f_superblock_test 00:16:06.265 ************************************ 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85252 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85252 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85252 ']' 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.266 09:29:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.266 [2024-12-12 09:29:40.204752] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:16:06.266 [2024-12-12 09:29:40.204873] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85252 ] 00:16:06.526 [2024-12-12 09:29:40.382261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.526 [2024-12-12 09:29:40.511554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.786 [2024-12-12 09:29:40.741489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.786 [2024-12-12 09:29:40.741632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.046 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 malloc1 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 [2024-12-12 09:29:41.085943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.307 [2024-12-12 09:29:41.086110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.307 [2024-12-12 09:29:41.086150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.307 [2024-12-12 09:29:41.086178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.307 [2024-12-12 09:29:41.088649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.307 [2024-12-12 09:29:41.088724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.307 pt1 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 malloc2 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 [2024-12-12 09:29:41.148743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.307 [2024-12-12 09:29:41.148851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.307 [2024-12-12 09:29:41.148908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.307 [2024-12-12 09:29:41.148940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.307 [2024-12-12 09:29:41.151298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.307 [2024-12-12 09:29:41.151385] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.307 pt2 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 malloc3 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 [2024-12-12 09:29:41.239854] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:07.307 [2024-12-12 09:29:41.239955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.307 [2024-12-12 09:29:41.240023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:07.307 [2024-12-12 09:29:41.240067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.307 [2024-12-12 09:29:41.242365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.307 [2024-12-12 09:29:41.242431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:07.307 pt3 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 malloc4 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 [2024-12-12 09:29:41.304196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:07.307 [2024-12-12 09:29:41.304251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.307 [2024-12-12 09:29:41.304272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:07.307 [2024-12-12 09:29:41.304282] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.307 [2024-12-12 09:29:41.306616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.307 [2024-12-12 09:29:41.306649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:07.307 pt4 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.307 [2024-12-12 09:29:41.316217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.307 [2024-12-12 09:29:41.318255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.307 [2024-12-12 09:29:41.318341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:07.307 [2024-12-12 09:29:41.318388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:07.307 [2024-12-12 09:29:41.318578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:07.307 [2024-12-12 09:29:41.318605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:07.307 [2024-12-12 09:29:41.318839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:07.307 [2024-12-12 09:29:41.325756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:07.307 [2024-12-12 09:29:41.325781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:07.307 [2024-12-12 09:29:41.325951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.307 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.568 
09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.568 "name": "raid_bdev1", 00:16:07.568 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:07.568 "strip_size_kb": 64, 00:16:07.568 "state": "online", 00:16:07.568 "raid_level": "raid5f", 00:16:07.568 "superblock": true, 00:16:07.568 "num_base_bdevs": 4, 00:16:07.568 "num_base_bdevs_discovered": 4, 00:16:07.568 "num_base_bdevs_operational": 4, 00:16:07.568 "base_bdevs_list": [ 00:16:07.568 { 00:16:07.568 "name": "pt1", 00:16:07.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.568 "is_configured": true, 00:16:07.568 "data_offset": 2048, 00:16:07.568 "data_size": 63488 00:16:07.568 }, 00:16:07.568 { 00:16:07.568 "name": "pt2", 00:16:07.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.568 "is_configured": true, 00:16:07.568 "data_offset": 2048, 00:16:07.568 
"data_size": 63488 00:16:07.568 }, 00:16:07.568 { 00:16:07.568 "name": "pt3", 00:16:07.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.568 "is_configured": true, 00:16:07.568 "data_offset": 2048, 00:16:07.568 "data_size": 63488 00:16:07.568 }, 00:16:07.568 { 00:16:07.568 "name": "pt4", 00:16:07.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.568 "is_configured": true, 00:16:07.568 "data_offset": 2048, 00:16:07.568 "data_size": 63488 00:16:07.568 } 00:16:07.568 ] 00:16:07.568 }' 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.568 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.828 [2024-12-12 09:29:41.790641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.828 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.828 "name": "raid_bdev1", 00:16:07.828 "aliases": [ 00:16:07.828 "4df67c5c-b476-431c-91a9-17238b5fd5a0" 00:16:07.828 ], 00:16:07.828 "product_name": "Raid Volume", 00:16:07.828 "block_size": 512, 00:16:07.828 "num_blocks": 190464, 00:16:07.828 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:07.828 "assigned_rate_limits": { 00:16:07.828 "rw_ios_per_sec": 0, 00:16:07.828 "rw_mbytes_per_sec": 0, 00:16:07.828 "r_mbytes_per_sec": 0, 00:16:07.828 "w_mbytes_per_sec": 0 00:16:07.828 }, 00:16:07.828 "claimed": false, 00:16:07.828 "zoned": false, 00:16:07.828 "supported_io_types": { 00:16:07.828 "read": true, 00:16:07.828 "write": true, 00:16:07.828 "unmap": false, 00:16:07.828 "flush": false, 00:16:07.828 "reset": true, 00:16:07.828 "nvme_admin": false, 00:16:07.828 "nvme_io": false, 00:16:07.828 "nvme_io_md": false, 00:16:07.828 "write_zeroes": true, 00:16:07.828 "zcopy": false, 00:16:07.828 "get_zone_info": false, 00:16:07.828 "zone_management": false, 00:16:07.828 "zone_append": false, 00:16:07.828 "compare": false, 00:16:07.828 "compare_and_write": false, 00:16:07.828 "abort": false, 00:16:07.828 "seek_hole": false, 00:16:07.828 "seek_data": false, 00:16:07.828 "copy": false, 00:16:07.828 "nvme_iov_md": false 00:16:07.828 }, 00:16:07.828 "driver_specific": { 00:16:07.828 "raid": { 00:16:07.828 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:07.828 "strip_size_kb": 64, 00:16:07.828 "state": "online", 00:16:07.828 "raid_level": "raid5f", 00:16:07.828 "superblock": true, 00:16:07.828 "num_base_bdevs": 4, 00:16:07.828 "num_base_bdevs_discovered": 4, 00:16:07.828 "num_base_bdevs_operational": 4, 00:16:07.828 "base_bdevs_list": [ 00:16:07.828 { 00:16:07.828 "name": "pt1", 00:16:07.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.828 "is_configured": true, 00:16:07.828 "data_offset": 2048, 
00:16:07.828 "data_size": 63488 00:16:07.828 }, 00:16:07.828 { 00:16:07.828 "name": "pt2", 00:16:07.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.828 "is_configured": true, 00:16:07.828 "data_offset": 2048, 00:16:07.828 "data_size": 63488 00:16:07.828 }, 00:16:07.828 { 00:16:07.828 "name": "pt3", 00:16:07.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.828 "is_configured": true, 00:16:07.828 "data_offset": 2048, 00:16:07.829 "data_size": 63488 00:16:07.829 }, 00:16:07.829 { 00:16:07.829 "name": "pt4", 00:16:07.829 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.829 "is_configured": true, 00:16:07.829 "data_offset": 2048, 00:16:07.829 "data_size": 63488 00:16:07.829 } 00:16:07.829 ] 00:16:07.829 } 00:16:07.829 } 00:16:07.829 }' 00:16:07.829 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.088 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:08.088 pt2 00:16:08.088 pt3 00:16:08.088 pt4' 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.089 09:29:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.089 09:29:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:08.089 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 [2024-12-12 09:29:42.118043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4df67c5c-b476-431c-91a9-17238b5fd5a0 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
4df67c5c-b476-431c-91a9-17238b5fd5a0 ']' 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 [2024-12-12 09:29:42.145834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.350 [2024-12-12 09:29:42.145905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.350 [2024-12-12 09:29:42.146017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.350 [2024-12-12 09:29:42.146121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.350 [2024-12-12 09:29:42.146269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.350 
09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 [2024-12-12 09:29:42.313566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:08.350 [2024-12-12 09:29:42.315667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:08.350 [2024-12-12 09:29:42.315765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:08.350 [2024-12-12 09:29:42.315831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:08.350 [2024-12-12 09:29:42.315911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:08.350 [2024-12-12 09:29:42.315995] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:08.350 [2024-12-12 09:29:42.316077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:08.350 [2024-12-12 09:29:42.316133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:08.350 [2024-12-12 09:29:42.316181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.350 [2024-12-12 09:29:42.316211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:08.350 request: 00:16:08.350 { 00:16:08.350 "name": "raid_bdev1", 00:16:08.350 "raid_level": "raid5f", 00:16:08.350 "base_bdevs": [ 00:16:08.350 "malloc1", 00:16:08.350 "malloc2", 00:16:08.350 "malloc3", 00:16:08.350 "malloc4" 00:16:08.350 ], 00:16:08.350 "strip_size_kb": 64, 00:16:08.350 "superblock": false, 00:16:08.350 "method": "bdev_raid_create", 00:16:08.350 "req_id": 1 00:16:08.350 } 00:16:08.350 Got JSON-RPC error response 
00:16:08.350 response: 00:16:08.350 { 00:16:08.350 "code": -17, 00:16:08.350 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:08.350 } 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 [2024-12-12 09:29:42.381437] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:08.610 [2024-12-12 09:29:42.381529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:08.610 [2024-12-12 09:29:42.381558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:08.610 [2024-12-12 09:29:42.381584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.610 [2024-12-12 09:29:42.384023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.610 [2024-12-12 09:29:42.384091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:08.610 [2024-12-12 09:29:42.384180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:08.610 [2024-12-12 09:29:42.384264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:08.610 pt1 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:08.610 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.611 "name": "raid_bdev1", 00:16:08.611 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:08.611 "strip_size_kb": 64, 00:16:08.611 "state": "configuring", 00:16:08.611 "raid_level": "raid5f", 00:16:08.611 "superblock": true, 00:16:08.611 "num_base_bdevs": 4, 00:16:08.611 "num_base_bdevs_discovered": 1, 00:16:08.611 "num_base_bdevs_operational": 4, 00:16:08.611 "base_bdevs_list": [ 00:16:08.611 { 00:16:08.611 "name": "pt1", 00:16:08.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.611 "is_configured": true, 00:16:08.611 "data_offset": 2048, 00:16:08.611 "data_size": 63488 00:16:08.611 }, 00:16:08.611 { 00:16:08.611 "name": null, 00:16:08.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.611 "is_configured": false, 00:16:08.611 "data_offset": 2048, 00:16:08.611 "data_size": 63488 00:16:08.611 }, 00:16:08.611 { 00:16:08.611 "name": null, 00:16:08.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.611 "is_configured": false, 00:16:08.611 "data_offset": 2048, 00:16:08.611 "data_size": 63488 00:16:08.611 }, 00:16:08.611 { 00:16:08.611 "name": null, 00:16:08.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.611 "is_configured": false, 00:16:08.611 "data_offset": 2048, 00:16:08.611 "data_size": 63488 00:16:08.611 } 00:16:08.611 ] 00:16:08.611 }' 
00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.611 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.871 [2024-12-12 09:29:42.840625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.871 [2024-12-12 09:29:42.840677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.871 [2024-12-12 09:29:42.840694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:08.871 [2024-12-12 09:29:42.840704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.871 [2024-12-12 09:29:42.841075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.871 [2024-12-12 09:29:42.841105] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.871 [2024-12-12 09:29:42.841179] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.871 [2024-12-12 09:29:42.841201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.871 pt2 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.871 [2024-12-12 09:29:42.852634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.871 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:09.131 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.131 "name": "raid_bdev1", 00:16:09.131 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:09.131 "strip_size_kb": 64, 00:16:09.131 "state": "configuring", 00:16:09.131 "raid_level": "raid5f", 00:16:09.131 "superblock": true, 00:16:09.131 "num_base_bdevs": 4, 00:16:09.131 "num_base_bdevs_discovered": 1, 00:16:09.131 "num_base_bdevs_operational": 4, 00:16:09.131 "base_bdevs_list": [ 00:16:09.131 { 00:16:09.131 "name": "pt1", 00:16:09.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.131 "is_configured": true, 00:16:09.131 "data_offset": 2048, 00:16:09.131 "data_size": 63488 00:16:09.131 }, 00:16:09.131 { 00:16:09.131 "name": null, 00:16:09.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.131 "is_configured": false, 00:16:09.131 "data_offset": 0, 00:16:09.131 "data_size": 63488 00:16:09.131 }, 00:16:09.131 { 00:16:09.131 "name": null, 00:16:09.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.131 "is_configured": false, 00:16:09.131 "data_offset": 2048, 00:16:09.131 "data_size": 63488 00:16:09.131 }, 00:16:09.131 { 00:16:09.131 "name": null, 00:16:09.131 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.131 "is_configured": false, 00:16:09.131 "data_offset": 2048, 00:16:09.131 "data_size": 63488 00:16:09.131 } 00:16:09.131 ] 00:16:09.131 }' 00:16:09.131 09:29:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.131 09:29:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 [2024-12-12 09:29:43.299823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.391 [2024-12-12 09:29:43.299866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.391 [2024-12-12 09:29:43.299880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:09.391 [2024-12-12 09:29:43.299888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.391 [2024-12-12 09:29:43.300284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.391 [2024-12-12 09:29:43.300308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.391 [2024-12-12 09:29:43.300363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:09.391 [2024-12-12 09:29:43.300378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.391 pt2 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 [2024-12-12 09:29:43.311804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:09.391 [2024-12-12 09:29:43.311905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.391 [2024-12-12 09:29:43.311937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:09.391 [2024-12-12 09:29:43.311979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.391 [2024-12-12 09:29:43.312353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.391 [2024-12-12 09:29:43.312412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:09.391 [2024-12-12 09:29:43.312488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:09.391 [2024-12-12 09:29:43.312541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:09.391 pt3 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 [2024-12-12 09:29:43.323769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:09.391 [2024-12-12 09:29:43.323845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.391 [2024-12-12 09:29:43.323874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:09.391 [2024-12-12 09:29:43.323898] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.391 [2024-12-12 09:29:43.324292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.391 [2024-12-12 09:29:43.324348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:09.391 [2024-12-12 09:29:43.324428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:09.391 [2024-12-12 09:29:43.324477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:09.391 [2024-12-12 09:29:43.324633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:09.391 [2024-12-12 09:29:43.324645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.391 [2024-12-12 09:29:43.324885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:09.391 [2024-12-12 09:29:43.331663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:09.391 pt4 00:16:09.391 [2024-12-12 09:29:43.331737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:09.391 [2024-12-12 09:29:43.331897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.391 "name": "raid_bdev1", 00:16:09.391 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:09.391 "strip_size_kb": 64, 00:16:09.391 "state": "online", 00:16:09.391 "raid_level": "raid5f", 00:16:09.391 "superblock": true, 00:16:09.391 "num_base_bdevs": 4, 00:16:09.391 "num_base_bdevs_discovered": 4, 00:16:09.391 "num_base_bdevs_operational": 4, 00:16:09.391 "base_bdevs_list": [ 00:16:09.391 { 00:16:09.391 "name": "pt1", 00:16:09.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.391 "is_configured": true, 00:16:09.391 
"data_offset": 2048, 00:16:09.391 "data_size": 63488 00:16:09.391 }, 00:16:09.391 { 00:16:09.391 "name": "pt2", 00:16:09.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.391 "is_configured": true, 00:16:09.391 "data_offset": 2048, 00:16:09.391 "data_size": 63488 00:16:09.391 }, 00:16:09.391 { 00:16:09.391 "name": "pt3", 00:16:09.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.391 "is_configured": true, 00:16:09.391 "data_offset": 2048, 00:16:09.391 "data_size": 63488 00:16:09.391 }, 00:16:09.391 { 00:16:09.391 "name": "pt4", 00:16:09.391 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.391 "is_configured": true, 00:16:09.391 "data_offset": 2048, 00:16:09.391 "data_size": 63488 00:16:09.391 } 00:16:09.391 ] 00:16:09.391 }' 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.391 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.961 09:29:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:09.961 [2024-12-12 09:29:43.788357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:09.961 "name": "raid_bdev1", 00:16:09.961 "aliases": [ 00:16:09.961 "4df67c5c-b476-431c-91a9-17238b5fd5a0" 00:16:09.961 ], 00:16:09.961 "product_name": "Raid Volume", 00:16:09.961 "block_size": 512, 00:16:09.961 "num_blocks": 190464, 00:16:09.961 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:09.961 "assigned_rate_limits": { 00:16:09.961 "rw_ios_per_sec": 0, 00:16:09.961 "rw_mbytes_per_sec": 0, 00:16:09.961 "r_mbytes_per_sec": 0, 00:16:09.961 "w_mbytes_per_sec": 0 00:16:09.961 }, 00:16:09.961 "claimed": false, 00:16:09.961 "zoned": false, 00:16:09.961 "supported_io_types": { 00:16:09.961 "read": true, 00:16:09.961 "write": true, 00:16:09.961 "unmap": false, 00:16:09.961 "flush": false, 00:16:09.961 "reset": true, 00:16:09.961 "nvme_admin": false, 00:16:09.961 "nvme_io": false, 00:16:09.961 "nvme_io_md": false, 00:16:09.961 "write_zeroes": true, 00:16:09.961 "zcopy": false, 00:16:09.961 "get_zone_info": false, 00:16:09.961 "zone_management": false, 00:16:09.961 "zone_append": false, 00:16:09.961 "compare": false, 00:16:09.961 "compare_and_write": false, 00:16:09.961 "abort": false, 00:16:09.961 "seek_hole": false, 00:16:09.961 "seek_data": false, 00:16:09.961 "copy": false, 00:16:09.961 "nvme_iov_md": false 00:16:09.961 }, 00:16:09.961 "driver_specific": { 00:16:09.961 "raid": { 00:16:09.961 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:09.961 "strip_size_kb": 64, 00:16:09.961 "state": "online", 00:16:09.961 "raid_level": "raid5f", 00:16:09.961 "superblock": true, 00:16:09.961 "num_base_bdevs": 4, 00:16:09.961 "num_base_bdevs_discovered": 4, 
00:16:09.961 "num_base_bdevs_operational": 4, 00:16:09.961 "base_bdevs_list": [ 00:16:09.961 { 00:16:09.961 "name": "pt1", 00:16:09.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.961 "is_configured": true, 00:16:09.961 "data_offset": 2048, 00:16:09.961 "data_size": 63488 00:16:09.961 }, 00:16:09.961 { 00:16:09.961 "name": "pt2", 00:16:09.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.961 "is_configured": true, 00:16:09.961 "data_offset": 2048, 00:16:09.961 "data_size": 63488 00:16:09.961 }, 00:16:09.961 { 00:16:09.961 "name": "pt3", 00:16:09.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.961 "is_configured": true, 00:16:09.961 "data_offset": 2048, 00:16:09.961 "data_size": 63488 00:16:09.961 }, 00:16:09.961 { 00:16:09.961 "name": "pt4", 00:16:09.961 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.961 "is_configured": true, 00:16:09.961 "data_offset": 2048, 00:16:09.961 "data_size": 63488 00:16:09.961 } 00:16:09.961 ] 00:16:09.961 } 00:16:09.961 } 00:16:09.961 }' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:09.961 pt2 00:16:09.961 pt3 00:16:09.961 pt4' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.961 09:29:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.961 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.221 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:10.221 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.221 09:29:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.221 09:29:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.221 
09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:10.221 [2024-12-12 09:29:44.135822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4df67c5c-b476-431c-91a9-17238b5fd5a0 '!=' 4df67c5c-b476-431c-91a9-17238b5fd5a0 ']' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.221 [2024-12-12 09:29:44.183626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.221 "name": "raid_bdev1", 00:16:10.221 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:10.221 "strip_size_kb": 64, 00:16:10.221 "state": "online", 00:16:10.221 "raid_level": "raid5f", 00:16:10.221 "superblock": true, 00:16:10.221 "num_base_bdevs": 4, 00:16:10.221 "num_base_bdevs_discovered": 3, 00:16:10.221 "num_base_bdevs_operational": 3, 00:16:10.221 "base_bdevs_list": [ 00:16:10.221 { 00:16:10.221 "name": null, 00:16:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.221 "is_configured": false, 00:16:10.221 "data_offset": 0, 00:16:10.221 "data_size": 63488 00:16:10.221 }, 00:16:10.221 { 00:16:10.221 "name": "pt2", 00:16:10.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.221 "is_configured": true, 00:16:10.221 "data_offset": 2048, 00:16:10.221 "data_size": 63488 00:16:10.221 }, 00:16:10.221 { 00:16:10.221 "name": "pt3", 00:16:10.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.221 "is_configured": true, 00:16:10.221 "data_offset": 2048, 00:16:10.221 "data_size": 63488 00:16:10.221 }, 00:16:10.221 { 00:16:10.221 "name": "pt4", 00:16:10.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.221 "is_configured": true, 00:16:10.221 
"data_offset": 2048, 00:16:10.221 "data_size": 63488 00:16:10.221 } 00:16:10.221 ] 00:16:10.221 }' 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.221 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 [2024-12-12 09:29:44.642857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.791 [2024-12-12 09:29:44.642884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.791 [2024-12-12 09:29:44.642943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.791 [2024-12-12 09:29:44.643022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.791 [2024-12-12 09:29:44.643031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 [2024-12-12 09:29:44.738693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:10.791 [2024-12-12 09:29:44.738738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.791 [2024-12-12 09:29:44.738755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:10.791 [2024-12-12 09:29:44.738763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.791 [2024-12-12 09:29:44.741184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.791 [2024-12-12 09:29:44.741216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:10.791 [2024-12-12 09:29:44.741287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:10.791 [2024-12-12 09:29:44.741332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.791 pt2 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.791 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.792 "name": "raid_bdev1", 00:16:10.792 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:10.792 "strip_size_kb": 64, 00:16:10.792 "state": "configuring", 00:16:10.792 "raid_level": "raid5f", 00:16:10.792 "superblock": true, 00:16:10.792 
"num_base_bdevs": 4, 00:16:10.792 "num_base_bdevs_discovered": 1, 00:16:10.792 "num_base_bdevs_operational": 3, 00:16:10.792 "base_bdevs_list": [ 00:16:10.792 { 00:16:10.792 "name": null, 00:16:10.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.792 "is_configured": false, 00:16:10.792 "data_offset": 2048, 00:16:10.792 "data_size": 63488 00:16:10.792 }, 00:16:10.792 { 00:16:10.792 "name": "pt2", 00:16:10.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.792 "is_configured": true, 00:16:10.792 "data_offset": 2048, 00:16:10.792 "data_size": 63488 00:16:10.792 }, 00:16:10.792 { 00:16:10.792 "name": null, 00:16:10.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.792 "is_configured": false, 00:16:10.792 "data_offset": 2048, 00:16:10.792 "data_size": 63488 00:16:10.792 }, 00:16:10.792 { 00:16:10.792 "name": null, 00:16:10.792 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.792 "is_configured": false, 00:16:10.792 "data_offset": 2048, 00:16:10.792 "data_size": 63488 00:16:10.792 } 00:16:10.792 ] 00:16:10.792 }' 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.792 09:29:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.361 [2024-12-12 09:29:45.197889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:11.361 [2024-12-12 
09:29:45.197946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.361 [2024-12-12 09:29:45.197974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:11.361 [2024-12-12 09:29:45.197982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.361 [2024-12-12 09:29:45.198313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.361 [2024-12-12 09:29:45.198328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:11.361 [2024-12-12 09:29:45.198385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:11.361 [2024-12-12 09:29:45.198401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:11.361 pt3 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.361 "name": "raid_bdev1", 00:16:11.361 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:11.361 "strip_size_kb": 64, 00:16:11.361 "state": "configuring", 00:16:11.361 "raid_level": "raid5f", 00:16:11.361 "superblock": true, 00:16:11.361 "num_base_bdevs": 4, 00:16:11.361 "num_base_bdevs_discovered": 2, 00:16:11.361 "num_base_bdevs_operational": 3, 00:16:11.361 "base_bdevs_list": [ 00:16:11.361 { 00:16:11.361 "name": null, 00:16:11.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.361 "is_configured": false, 00:16:11.361 "data_offset": 2048, 00:16:11.361 "data_size": 63488 00:16:11.361 }, 00:16:11.361 { 00:16:11.361 "name": "pt2", 00:16:11.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.361 "is_configured": true, 00:16:11.361 "data_offset": 2048, 00:16:11.361 "data_size": 63488 00:16:11.361 }, 00:16:11.361 { 00:16:11.361 "name": "pt3", 00:16:11.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.361 "is_configured": true, 00:16:11.361 "data_offset": 2048, 00:16:11.361 "data_size": 63488 00:16:11.361 }, 00:16:11.361 { 00:16:11.361 "name": null, 00:16:11.361 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:11.361 "is_configured": false, 00:16:11.361 "data_offset": 2048, 
00:16:11.361 "data_size": 63488 00:16:11.361 } 00:16:11.361 ] 00:16:11.361 }' 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.361 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.931 [2024-12-12 09:29:45.681087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:11.931 [2024-12-12 09:29:45.681190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.931 [2024-12-12 09:29:45.681226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:11.931 [2024-12-12 09:29:45.681253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.931 [2024-12-12 09:29:45.681637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.931 [2024-12-12 09:29:45.681699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:11.931 [2024-12-12 09:29:45.681785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:11.931 [2024-12-12 09:29:45.681836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:11.931 [2024-12-12 09:29:45.682001] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:11.931 [2024-12-12 09:29:45.682041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.931 [2024-12-12 09:29:45.682302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.931 [2024-12-12 09:29:45.688997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:11.931 [2024-12-12 09:29:45.689058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:11.931 [2024-12-12 09:29:45.689366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.931 pt4 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.931 
09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.931 "name": "raid_bdev1", 00:16:11.931 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:11.931 "strip_size_kb": 64, 00:16:11.931 "state": "online", 00:16:11.931 "raid_level": "raid5f", 00:16:11.931 "superblock": true, 00:16:11.931 "num_base_bdevs": 4, 00:16:11.931 "num_base_bdevs_discovered": 3, 00:16:11.931 "num_base_bdevs_operational": 3, 00:16:11.931 "base_bdevs_list": [ 00:16:11.931 { 00:16:11.931 "name": null, 00:16:11.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.931 "is_configured": false, 00:16:11.931 "data_offset": 2048, 00:16:11.931 "data_size": 63488 00:16:11.931 }, 00:16:11.931 { 00:16:11.931 "name": "pt2", 00:16:11.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.931 "is_configured": true, 00:16:11.931 "data_offset": 2048, 00:16:11.931 "data_size": 63488 00:16:11.931 }, 00:16:11.931 { 00:16:11.931 "name": "pt3", 00:16:11.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.931 "is_configured": true, 00:16:11.931 "data_offset": 2048, 00:16:11.931 "data_size": 63488 00:16:11.931 }, 00:16:11.931 { 00:16:11.931 "name": "pt4", 00:16:11.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:11.931 "is_configured": true, 00:16:11.931 "data_offset": 2048, 00:16:11.931 "data_size": 63488 00:16:11.931 } 00:16:11.931 ] 00:16:11.931 }' 00:16:11.931 09:29:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.931 09:29:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.191 [2024-12-12 09:29:46.137777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.191 [2024-12-12 09:29:46.137848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.191 [2024-12-12 09:29:46.137906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.191 [2024-12-12 09:29:46.137991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.191 [2024-12-12 09:29:46.138003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:12.191 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.192 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.192 [2024-12-12 09:29:46.213649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:12.452 [2024-12-12 09:29:46.213761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.452 [2024-12-12 09:29:46.213789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:12.452 [2024-12-12 09:29:46.213801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.452 [2024-12-12 09:29:46.216335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.452 [2024-12-12 09:29:46.216421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:12.452 [2024-12-12 09:29:46.216496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:12.452 [2024-12-12 09:29:46.216543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:12.452 
[2024-12-12 09:29:46.216664] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:12.452 [2024-12-12 09:29:46.216679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.452 [2024-12-12 09:29:46.216693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:12.452 [2024-12-12 09:29:46.216754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:12.452 [2024-12-12 09:29:46.216838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:12.452 pt1 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.452 "name": "raid_bdev1", 00:16:12.452 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:12.452 "strip_size_kb": 64, 00:16:12.452 "state": "configuring", 00:16:12.452 "raid_level": "raid5f", 00:16:12.452 "superblock": true, 00:16:12.452 "num_base_bdevs": 4, 00:16:12.452 "num_base_bdevs_discovered": 2, 00:16:12.452 "num_base_bdevs_operational": 3, 00:16:12.452 "base_bdevs_list": [ 00:16:12.452 { 00:16:12.452 "name": null, 00:16:12.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.452 "is_configured": false, 00:16:12.452 "data_offset": 2048, 00:16:12.452 "data_size": 63488 00:16:12.452 }, 00:16:12.452 { 00:16:12.452 "name": "pt2", 00:16:12.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.452 "is_configured": true, 00:16:12.452 "data_offset": 2048, 00:16:12.452 "data_size": 63488 00:16:12.452 }, 00:16:12.452 { 00:16:12.452 "name": "pt3", 00:16:12.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:12.452 "is_configured": true, 00:16:12.452 "data_offset": 2048, 00:16:12.452 "data_size": 63488 00:16:12.452 }, 00:16:12.452 { 00:16:12.452 "name": null, 00:16:12.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:12.452 "is_configured": false, 00:16:12.452 "data_offset": 2048, 00:16:12.452 "data_size": 63488 00:16:12.452 } 00:16:12.452 ] 
00:16:12.452 }' 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.452 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.712 [2024-12-12 09:29:46.716798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:12.712 [2024-12-12 09:29:46.716888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.712 [2024-12-12 09:29:46.716909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:12.712 [2024-12-12 09:29:46.716917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.712 [2024-12-12 09:29:46.717323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.712 [2024-12-12 09:29:46.717341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:12.712 [2024-12-12 09:29:46.717399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:12.712 [2024-12-12 09:29:46.717416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:12.712 [2024-12-12 09:29:46.717534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:12.712 [2024-12-12 09:29:46.717543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:12.712 [2024-12-12 09:29:46.717804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:12.712 pt4 00:16:12.712 [2024-12-12 09:29:46.724457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:12.712 [2024-12-12 09:29:46.724482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:12.712 [2024-12-12 09:29:46.724718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.712 09:29:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.712 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.972 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.972 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.972 "name": "raid_bdev1", 00:16:12.972 "uuid": "4df67c5c-b476-431c-91a9-17238b5fd5a0", 00:16:12.972 "strip_size_kb": 64, 00:16:12.972 "state": "online", 00:16:12.972 "raid_level": "raid5f", 00:16:12.972 "superblock": true, 00:16:12.972 "num_base_bdevs": 4, 00:16:12.972 "num_base_bdevs_discovered": 3, 00:16:12.972 "num_base_bdevs_operational": 3, 00:16:12.972 "base_bdevs_list": [ 00:16:12.972 { 00:16:12.972 "name": null, 00:16:12.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.972 "is_configured": false, 00:16:12.972 "data_offset": 2048, 00:16:12.972 "data_size": 63488 00:16:12.972 }, 00:16:12.972 { 00:16:12.972 "name": "pt2", 00:16:12.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.972 "is_configured": true, 00:16:12.972 "data_offset": 2048, 00:16:12.972 "data_size": 63488 00:16:12.972 }, 00:16:12.972 { 00:16:12.972 "name": "pt3", 00:16:12.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:12.972 "is_configured": true, 00:16:12.972 "data_offset": 2048, 00:16:12.972 "data_size": 63488 
00:16:12.972 }, 00:16:12.972 { 00:16:12.972 "name": "pt4", 00:16:12.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:12.972 "is_configured": true, 00:16:12.972 "data_offset": 2048, 00:16:12.972 "data_size": 63488 00:16:12.972 } 00:16:12.972 ] 00:16:12.972 }' 00:16:12.972 09:29:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.972 09:29:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:13.232 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.492 [2024-12-12 09:29:47.256753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4df67c5c-b476-431c-91a9-17238b5fd5a0 '!=' 4df67c5c-b476-431c-91a9-17238b5fd5a0 ']' 00:16:13.492 09:29:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85252 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85252 ']' 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85252 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85252 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85252' 00:16:13.492 killing process with pid 85252 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 85252 00:16:13.492 [2024-12-12 09:29:47.344998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.492 [2024-12-12 09:29:47.345059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.492 [2024-12-12 09:29:47.345119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.492 [2024-12-12 09:29:47.345134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:13.492 09:29:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 85252 00:16:13.751 [2024-12-12 09:29:47.751171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.133 ************************************ 00:16:15.133 END TEST raid5f_superblock_test 00:16:15.133 09:29:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:15.133 00:16:15.133 real 0m8.816s 00:16:15.133 user 0m13.740s 00:16:15.133 sys 0m1.729s 00:16:15.133 09:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.133 09:29:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.133 ************************************ 00:16:15.133 09:29:48 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:15.133 09:29:48 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:15.133 09:29:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:15.133 09:29:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.133 09:29:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.133 ************************************ 00:16:15.133 START TEST raid5f_rebuild_test 00:16:15.133 ************************************ 00:16:15.133 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:15.133 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:15.133 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:15.133 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:15.133 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:15.133 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:15.133 09:29:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85739 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85739 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85739 ']' 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.133 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.133 [2024-12-12 09:29:49.105727] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:16:15.133 [2024-12-12 09:29:49.105997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85739 ] 00:16:15.133 I/O size of 3145728 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:16:15.393 [2024-12-12 09:29:49.294047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.653 [2024-12-12 09:29:49.421767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.653 [2024-12-12 09:29:49.647595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.653 [2024-12-12 09:29:49.647748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 BaseBdev1_malloc 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 [2024-12-12 09:29:49.994416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:16.223 [2024-12-12 09:29:49.994491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:16.223 [2024-12-12 09:29:49.994514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:16.223 [2024-12-12 09:29:49.994526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.223 [2024-12-12 09:29:49.996858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.223 [2024-12-12 09:29:49.996895] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.223 BaseBdev1 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 BaseBdev2_malloc 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 [2024-12-12 09:29:50.056416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:16.223 [2024-12-12 09:29:50.056579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.223 [2024-12-12 09:29:50.056604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:16.223 [2024-12-12 09:29:50.056618] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.223 [2024-12-12 09:29:50.058983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.223 [2024-12-12 09:29:50.059016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:16.223 BaseBdev2 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 BaseBdev3_malloc 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 [2024-12-12 09:29:50.151045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:16.223 [2024-12-12 09:29:50.151093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.223 [2024-12-12 09:29:50.151114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.223 [2024-12-12 09:29:50.151124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.223 [2024-12-12 09:29:50.153438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.223 [2024-12-12 
09:29:50.153477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:16.223 BaseBdev3 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 BaseBdev4_malloc 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.223 [2024-12-12 09:29:50.207620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:16.223 [2024-12-12 09:29:50.207771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.223 [2024-12-12 09:29:50.207811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:16.223 [2024-12-12 09:29:50.207844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.223 [2024-12-12 09:29:50.210180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.223 [2024-12-12 09:29:50.210257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:16.223 BaseBdev4 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.223 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.483 spare_malloc 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.483 spare_delay 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.483 [2024-12-12 09:29:50.279053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.483 [2024-12-12 09:29:50.279100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.483 [2024-12-12 09:29:50.279117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:16.483 [2024-12-12 09:29:50.279127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.483 [2024-12-12 09:29:50.281473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.483 [2024-12-12 09:29:50.281588] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.483 spare 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.483 [2024-12-12 09:29:50.291098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.483 [2024-12-12 09:29:50.293121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.483 [2024-12-12 09:29:50.293189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.483 [2024-12-12 09:29:50.293239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:16.483 [2024-12-12 09:29:50.293329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:16.483 [2024-12-12 09:29:50.293344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:16.483 [2024-12-12 09:29:50.293593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:16.483 [2024-12-12 09:29:50.300335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:16.483 [2024-12-12 09:29:50.300356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:16.483 [2024-12-12 09:29:50.300526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.483 09:29:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.483 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.483 "name": "raid_bdev1", 00:16:16.483 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:16.483 "strip_size_kb": 64, 00:16:16.483 "state": "online", 00:16:16.483 "raid_level": "raid5f", 00:16:16.483 "superblock": false, 00:16:16.483 "num_base_bdevs": 4, 00:16:16.483 
"num_base_bdevs_discovered": 4, 00:16:16.483 "num_base_bdevs_operational": 4, 00:16:16.483 "base_bdevs_list": [ 00:16:16.483 { 00:16:16.483 "name": "BaseBdev1", 00:16:16.483 "uuid": "358ed7f7-6a76-5621-8201-2203e135a2e3", 00:16:16.483 "is_configured": true, 00:16:16.483 "data_offset": 0, 00:16:16.483 "data_size": 65536 00:16:16.483 }, 00:16:16.483 { 00:16:16.483 "name": "BaseBdev2", 00:16:16.483 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:16.483 "is_configured": true, 00:16:16.484 "data_offset": 0, 00:16:16.484 "data_size": 65536 00:16:16.484 }, 00:16:16.484 { 00:16:16.484 "name": "BaseBdev3", 00:16:16.484 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:16.484 "is_configured": true, 00:16:16.484 "data_offset": 0, 00:16:16.484 "data_size": 65536 00:16:16.484 }, 00:16:16.484 { 00:16:16.484 "name": "BaseBdev4", 00:16:16.484 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:16.484 "is_configured": true, 00:16:16.484 "data_offset": 0, 00:16:16.484 "data_size": 65536 00:16:16.484 } 00:16:16.484 ] 00:16:16.484 }' 00:16:16.484 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.484 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.743 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.743 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:16.743 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.743 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.743 [2024-12-12 09:29:50.741068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.743 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.003 09:29:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:17.003 [2024-12-12 09:29:51.008529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:17.263 /dev/nbd0 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.263 1+0 records in 00:16:17.263 1+0 records out 00:16:17.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481883 s, 8.5 MB/s 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:17.263 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:17.834 512+0 records in 00:16:17.834 512+0 records out 00:16:17.834 100663296 bytes (101 MB, 96 MiB) copied, 0.657784 s, 153 MB/s 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.834 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.094 [2024-12-12 09:29:51.958630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.094 [2024-12-12 09:29:51.973482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.094 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.094 09:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.094 09:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.094 "name": "raid_bdev1", 00:16:18.094 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:18.094 "strip_size_kb": 64, 00:16:18.094 "state": "online", 00:16:18.094 "raid_level": "raid5f", 00:16:18.094 "superblock": false, 00:16:18.094 "num_base_bdevs": 4, 00:16:18.094 "num_base_bdevs_discovered": 3, 00:16:18.094 "num_base_bdevs_operational": 3, 00:16:18.094 "base_bdevs_list": [ 00:16:18.094 { 00:16:18.094 "name": null, 00:16:18.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.094 "is_configured": false, 00:16:18.094 "data_offset": 0, 00:16:18.094 "data_size": 65536 00:16:18.094 }, 00:16:18.094 { 00:16:18.094 "name": "BaseBdev2", 00:16:18.094 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:18.094 "is_configured": true, 00:16:18.094 "data_offset": 0, 00:16:18.094 "data_size": 65536 00:16:18.094 }, 00:16:18.094 { 00:16:18.094 "name": "BaseBdev3", 00:16:18.094 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:18.094 "is_configured": true, 00:16:18.094 
"data_offset": 0, 00:16:18.094 "data_size": 65536 00:16:18.094 }, 00:16:18.094 { 00:16:18.094 "name": "BaseBdev4", 00:16:18.094 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:18.094 "is_configured": true, 00:16:18.094 "data_offset": 0, 00:16:18.094 "data_size": 65536 00:16:18.094 } 00:16:18.094 ] 00:16:18.094 }' 00:16:18.094 09:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.094 09:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 09:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.664 09:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.664 09:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 [2024-12-12 09:29:52.460585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.664 [2024-12-12 09:29:52.476458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:18.664 09:29:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.664 09:29:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:18.664 [2024-12-12 09:29:52.486688] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.604 
09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.604 "name": "raid_bdev1", 00:16:19.604 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:19.604 "strip_size_kb": 64, 00:16:19.604 "state": "online", 00:16:19.604 "raid_level": "raid5f", 00:16:19.604 "superblock": false, 00:16:19.604 "num_base_bdevs": 4, 00:16:19.604 "num_base_bdevs_discovered": 4, 00:16:19.604 "num_base_bdevs_operational": 4, 00:16:19.604 "process": { 00:16:19.604 "type": "rebuild", 00:16:19.604 "target": "spare", 00:16:19.604 "progress": { 00:16:19.604 "blocks": 19200, 00:16:19.604 "percent": 9 00:16:19.604 } 00:16:19.604 }, 00:16:19.604 "base_bdevs_list": [ 00:16:19.604 { 00:16:19.604 "name": "spare", 00:16:19.604 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:19.604 "is_configured": true, 00:16:19.604 "data_offset": 0, 00:16:19.604 "data_size": 65536 00:16:19.604 }, 00:16:19.604 { 00:16:19.604 "name": "BaseBdev2", 00:16:19.604 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:19.604 "is_configured": true, 00:16:19.604 "data_offset": 0, 00:16:19.604 "data_size": 65536 00:16:19.604 }, 00:16:19.604 { 00:16:19.604 "name": "BaseBdev3", 00:16:19.604 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:19.604 "is_configured": true, 00:16:19.604 "data_offset": 0, 00:16:19.604 "data_size": 65536 00:16:19.604 }, 00:16:19.604 { 00:16:19.604 "name": "BaseBdev4", 00:16:19.604 "uuid": 
"04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:19.604 "is_configured": true, 00:16:19.604 "data_offset": 0, 00:16:19.604 "data_size": 65536 00:16:19.604 } 00:16:19.604 ] 00:16:19.604 }' 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.604 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.864 [2024-12-12 09:29:53.641464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.864 [2024-12-12 09:29:53.695101] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:19.864 [2024-12-12 09:29:53.695231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.864 [2024-12-12 09:29:53.695275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.864 [2024-12-12 09:29:53.695318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.864 "name": "raid_bdev1", 00:16:19.864 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:19.864 "strip_size_kb": 64, 00:16:19.864 "state": "online", 00:16:19.864 "raid_level": "raid5f", 00:16:19.864 "superblock": false, 00:16:19.864 "num_base_bdevs": 4, 00:16:19.864 "num_base_bdevs_discovered": 3, 00:16:19.864 "num_base_bdevs_operational": 3, 00:16:19.864 "base_bdevs_list": [ 00:16:19.864 { 00:16:19.864 "name": null, 00:16:19.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.864 "is_configured": false, 00:16:19.864 "data_offset": 0, 
00:16:19.864 "data_size": 65536 00:16:19.864 }, 00:16:19.864 { 00:16:19.864 "name": "BaseBdev2", 00:16:19.864 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:19.864 "is_configured": true, 00:16:19.864 "data_offset": 0, 00:16:19.864 "data_size": 65536 00:16:19.864 }, 00:16:19.864 { 00:16:19.864 "name": "BaseBdev3", 00:16:19.864 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:19.864 "is_configured": true, 00:16:19.864 "data_offset": 0, 00:16:19.864 "data_size": 65536 00:16:19.864 }, 00:16:19.864 { 00:16:19.864 "name": "BaseBdev4", 00:16:19.864 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:19.864 "is_configured": true, 00:16:19.864 "data_offset": 0, 00:16:19.864 "data_size": 65536 00:16:19.864 } 00:16:19.864 ] 00:16:19.864 }' 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.864 09:29:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.123 09:29:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.383 "name": "raid_bdev1", 00:16:20.383 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:20.383 "strip_size_kb": 64, 00:16:20.383 "state": "online", 00:16:20.383 "raid_level": "raid5f", 00:16:20.383 "superblock": false, 00:16:20.383 "num_base_bdevs": 4, 00:16:20.383 "num_base_bdevs_discovered": 3, 00:16:20.383 "num_base_bdevs_operational": 3, 00:16:20.383 "base_bdevs_list": [ 00:16:20.383 { 00:16:20.383 "name": null, 00:16:20.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.383 "is_configured": false, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "name": "BaseBdev2", 00:16:20.383 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "name": "BaseBdev3", 00:16:20.383 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "name": "BaseBdev4", 00:16:20.383 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 } 00:16:20.383 ] 00:16:20.383 }' 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.383 [2024-12-12 09:29:54.263801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.383 [2024-12-12 09:29:54.277804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.383 09:29:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:20.383 [2024-12-12 09:29:54.286529] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.321 "name": "raid_bdev1", 00:16:21.321 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:21.321 "strip_size_kb": 64, 00:16:21.321 "state": "online", 00:16:21.321 "raid_level": "raid5f", 00:16:21.321 "superblock": false, 00:16:21.321 "num_base_bdevs": 4, 00:16:21.321 "num_base_bdevs_discovered": 4, 00:16:21.321 "num_base_bdevs_operational": 4, 00:16:21.321 "process": { 00:16:21.321 "type": "rebuild", 00:16:21.321 "target": "spare", 00:16:21.321 "progress": { 00:16:21.321 "blocks": 19200, 00:16:21.321 "percent": 9 00:16:21.321 } 00:16:21.321 }, 00:16:21.321 "base_bdevs_list": [ 00:16:21.321 { 00:16:21.321 "name": "spare", 00:16:21.321 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:21.321 "is_configured": true, 00:16:21.321 "data_offset": 0, 00:16:21.321 "data_size": 65536 00:16:21.321 }, 00:16:21.321 { 00:16:21.321 "name": "BaseBdev2", 00:16:21.321 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:21.321 "is_configured": true, 00:16:21.321 "data_offset": 0, 00:16:21.321 "data_size": 65536 00:16:21.321 }, 00:16:21.321 { 00:16:21.321 "name": "BaseBdev3", 00:16:21.321 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:21.321 "is_configured": true, 00:16:21.321 "data_offset": 0, 00:16:21.321 "data_size": 65536 00:16:21.321 }, 00:16:21.321 { 00:16:21.321 "name": "BaseBdev4", 00:16:21.321 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:21.321 "is_configured": true, 00:16:21.321 "data_offset": 0, 00:16:21.321 "data_size": 65536 00:16:21.321 } 00:16:21.321 ] 00:16:21.321 }' 00:16:21.321 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.579 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.579 "name": "raid_bdev1", 00:16:21.579 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:21.579 "strip_size_kb": 64, 00:16:21.579 "state": "online", 00:16:21.579 "raid_level": "raid5f", 00:16:21.579 "superblock": false, 
00:16:21.579 "num_base_bdevs": 4, 00:16:21.579 "num_base_bdevs_discovered": 4, 00:16:21.579 "num_base_bdevs_operational": 4, 00:16:21.579 "process": { 00:16:21.579 "type": "rebuild", 00:16:21.579 "target": "spare", 00:16:21.579 "progress": { 00:16:21.579 "blocks": 21120, 00:16:21.579 "percent": 10 00:16:21.579 } 00:16:21.579 }, 00:16:21.579 "base_bdevs_list": [ 00:16:21.579 { 00:16:21.579 "name": "spare", 00:16:21.579 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:21.579 "is_configured": true, 00:16:21.580 "data_offset": 0, 00:16:21.580 "data_size": 65536 00:16:21.580 }, 00:16:21.580 { 00:16:21.580 "name": "BaseBdev2", 00:16:21.580 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:21.580 "is_configured": true, 00:16:21.580 "data_offset": 0, 00:16:21.580 "data_size": 65536 00:16:21.580 }, 00:16:21.580 { 00:16:21.580 "name": "BaseBdev3", 00:16:21.580 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:21.580 "is_configured": true, 00:16:21.580 "data_offset": 0, 00:16:21.580 "data_size": 65536 00:16:21.580 }, 00:16:21.580 { 00:16:21.580 "name": "BaseBdev4", 00:16:21.580 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:21.580 "is_configured": true, 00:16:21.580 "data_offset": 0, 00:16:21.580 "data_size": 65536 00:16:21.580 } 00:16:21.580 ] 00:16:21.580 }' 00:16:21.580 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.580 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.580 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.580 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.580 09:29:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.957 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.957 "name": "raid_bdev1", 00:16:22.957 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:22.957 "strip_size_kb": 64, 00:16:22.957 "state": "online", 00:16:22.957 "raid_level": "raid5f", 00:16:22.957 "superblock": false, 00:16:22.957 "num_base_bdevs": 4, 00:16:22.957 "num_base_bdevs_discovered": 4, 00:16:22.957 "num_base_bdevs_operational": 4, 00:16:22.957 "process": { 00:16:22.957 "type": "rebuild", 00:16:22.957 "target": "spare", 00:16:22.957 "progress": { 00:16:22.957 "blocks": 42240, 00:16:22.957 "percent": 21 00:16:22.957 } 00:16:22.957 }, 00:16:22.957 "base_bdevs_list": [ 00:16:22.957 { 00:16:22.957 "name": "spare", 00:16:22.957 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:22.957 "is_configured": true, 00:16:22.957 "data_offset": 0, 00:16:22.957 "data_size": 65536 00:16:22.957 }, 00:16:22.957 { 00:16:22.957 
"name": "BaseBdev2", 00:16:22.958 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:22.958 "is_configured": true, 00:16:22.958 "data_offset": 0, 00:16:22.958 "data_size": 65536 00:16:22.958 }, 00:16:22.958 { 00:16:22.958 "name": "BaseBdev3", 00:16:22.958 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:22.958 "is_configured": true, 00:16:22.958 "data_offset": 0, 00:16:22.958 "data_size": 65536 00:16:22.958 }, 00:16:22.958 { 00:16:22.958 "name": "BaseBdev4", 00:16:22.958 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:22.958 "is_configured": true, 00:16:22.958 "data_offset": 0, 00:16:22.958 "data_size": 65536 00:16:22.958 } 00:16:22.958 ] 00:16:22.958 }' 00:16:22.958 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.958 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.958 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.958 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.958 09:29:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.900 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.900 "name": "raid_bdev1", 00:16:23.900 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:23.900 "strip_size_kb": 64, 00:16:23.900 "state": "online", 00:16:23.900 "raid_level": "raid5f", 00:16:23.901 "superblock": false, 00:16:23.901 "num_base_bdevs": 4, 00:16:23.901 "num_base_bdevs_discovered": 4, 00:16:23.901 "num_base_bdevs_operational": 4, 00:16:23.901 "process": { 00:16:23.901 "type": "rebuild", 00:16:23.901 "target": "spare", 00:16:23.901 "progress": { 00:16:23.901 "blocks": 65280, 00:16:23.901 "percent": 33 00:16:23.901 } 00:16:23.901 }, 00:16:23.901 "base_bdevs_list": [ 00:16:23.901 { 00:16:23.901 "name": "spare", 00:16:23.901 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:23.901 "is_configured": true, 00:16:23.901 "data_offset": 0, 00:16:23.901 "data_size": 65536 00:16:23.901 }, 00:16:23.901 { 00:16:23.901 "name": "BaseBdev2", 00:16:23.901 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:23.901 "is_configured": true, 00:16:23.901 "data_offset": 0, 00:16:23.901 "data_size": 65536 00:16:23.901 }, 00:16:23.901 { 00:16:23.901 "name": "BaseBdev3", 00:16:23.901 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:23.901 "is_configured": true, 00:16:23.901 "data_offset": 0, 00:16:23.901 "data_size": 65536 00:16:23.901 }, 00:16:23.901 { 00:16:23.901 "name": "BaseBdev4", 00:16:23.901 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:23.901 "is_configured": true, 00:16:23.901 "data_offset": 0, 00:16:23.901 
"data_size": 65536 00:16:23.901 } 00:16:23.901 ] 00:16:23.901 }' 00:16:23.901 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.901 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.901 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.901 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.901 09:29:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.839 09:29:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.099 09:29:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.099 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.099 "name": "raid_bdev1", 00:16:25.099 "uuid": 
"5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:25.099 "strip_size_kb": 64, 00:16:25.099 "state": "online", 00:16:25.099 "raid_level": "raid5f", 00:16:25.099 "superblock": false, 00:16:25.099 "num_base_bdevs": 4, 00:16:25.099 "num_base_bdevs_discovered": 4, 00:16:25.099 "num_base_bdevs_operational": 4, 00:16:25.099 "process": { 00:16:25.099 "type": "rebuild", 00:16:25.099 "target": "spare", 00:16:25.099 "progress": { 00:16:25.100 "blocks": 86400, 00:16:25.100 "percent": 43 00:16:25.100 } 00:16:25.100 }, 00:16:25.100 "base_bdevs_list": [ 00:16:25.100 { 00:16:25.100 "name": "spare", 00:16:25.100 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:25.100 "is_configured": true, 00:16:25.100 "data_offset": 0, 00:16:25.100 "data_size": 65536 00:16:25.100 }, 00:16:25.100 { 00:16:25.100 "name": "BaseBdev2", 00:16:25.100 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:25.100 "is_configured": true, 00:16:25.100 "data_offset": 0, 00:16:25.100 "data_size": 65536 00:16:25.100 }, 00:16:25.100 { 00:16:25.100 "name": "BaseBdev3", 00:16:25.100 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:25.100 "is_configured": true, 00:16:25.100 "data_offset": 0, 00:16:25.100 "data_size": 65536 00:16:25.100 }, 00:16:25.100 { 00:16:25.100 "name": "BaseBdev4", 00:16:25.100 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:25.100 "is_configured": true, 00:16:25.100 "data_offset": 0, 00:16:25.100 "data_size": 65536 00:16:25.100 } 00:16:25.100 ] 00:16:25.100 }' 00:16:25.100 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.100 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.100 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.100 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.100 09:29:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.037 09:29:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.037 09:30:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.037 09:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.037 "name": "raid_bdev1", 00:16:26.037 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:26.037 "strip_size_kb": 64, 00:16:26.037 "state": "online", 00:16:26.037 "raid_level": "raid5f", 00:16:26.037 "superblock": false, 00:16:26.037 "num_base_bdevs": 4, 00:16:26.037 "num_base_bdevs_discovered": 4, 00:16:26.037 "num_base_bdevs_operational": 4, 00:16:26.037 "process": { 00:16:26.037 "type": "rebuild", 00:16:26.037 "target": "spare", 00:16:26.037 "progress": { 00:16:26.037 "blocks": 107520, 00:16:26.037 "percent": 54 00:16:26.037 } 00:16:26.037 }, 00:16:26.037 "base_bdevs_list": [ 00:16:26.037 { 00:16:26.037 "name": "spare", 00:16:26.037 "uuid": 
"26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 0, 00:16:26.037 "data_size": 65536 00:16:26.037 }, 00:16:26.037 { 00:16:26.037 "name": "BaseBdev2", 00:16:26.037 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 0, 00:16:26.037 "data_size": 65536 00:16:26.037 }, 00:16:26.037 { 00:16:26.037 "name": "BaseBdev3", 00:16:26.037 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 0, 00:16:26.037 "data_size": 65536 00:16:26.037 }, 00:16:26.037 { 00:16:26.037 "name": "BaseBdev4", 00:16:26.037 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 0, 00:16:26.037 "data_size": 65536 00:16:26.037 } 00:16:26.037 ] 00:16:26.037 }' 00:16:26.037 09:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.295 09:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.295 09:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.295 09:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.295 09:30:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.230 09:30:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.230 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.230 "name": "raid_bdev1", 00:16:27.230 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:27.230 "strip_size_kb": 64, 00:16:27.230 "state": "online", 00:16:27.230 "raid_level": "raid5f", 00:16:27.230 "superblock": false, 00:16:27.230 "num_base_bdevs": 4, 00:16:27.230 "num_base_bdevs_discovered": 4, 00:16:27.230 "num_base_bdevs_operational": 4, 00:16:27.230 "process": { 00:16:27.230 "type": "rebuild", 00:16:27.230 "target": "spare", 00:16:27.230 "progress": { 00:16:27.230 "blocks": 130560, 00:16:27.230 "percent": 66 00:16:27.230 } 00:16:27.230 }, 00:16:27.230 "base_bdevs_list": [ 00:16:27.230 { 00:16:27.231 "name": "spare", 00:16:27.231 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:27.231 "is_configured": true, 00:16:27.231 "data_offset": 0, 00:16:27.231 "data_size": 65536 00:16:27.231 }, 00:16:27.231 { 00:16:27.231 "name": "BaseBdev2", 00:16:27.231 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:27.231 "is_configured": true, 00:16:27.231 "data_offset": 0, 00:16:27.231 "data_size": 65536 00:16:27.231 }, 00:16:27.231 { 00:16:27.231 "name": "BaseBdev3", 00:16:27.231 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:27.231 "is_configured": true, 00:16:27.231 "data_offset": 0, 00:16:27.231 "data_size": 65536 00:16:27.231 }, 
00:16:27.231 { 00:16:27.231 "name": "BaseBdev4", 00:16:27.231 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:27.231 "is_configured": true, 00:16:27.231 "data_offset": 0, 00:16:27.231 "data_size": 65536 00:16:27.231 } 00:16:27.231 ] 00:16:27.231 }' 00:16:27.231 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.231 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.231 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.490 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.490 09:30:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.426 "name": "raid_bdev1", 00:16:28.426 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:28.426 "strip_size_kb": 64, 00:16:28.426 "state": "online", 00:16:28.426 "raid_level": "raid5f", 00:16:28.426 "superblock": false, 00:16:28.426 "num_base_bdevs": 4, 00:16:28.426 "num_base_bdevs_discovered": 4, 00:16:28.426 "num_base_bdevs_operational": 4, 00:16:28.426 "process": { 00:16:28.426 "type": "rebuild", 00:16:28.426 "target": "spare", 00:16:28.426 "progress": { 00:16:28.426 "blocks": 151680, 00:16:28.426 "percent": 77 00:16:28.426 } 00:16:28.426 }, 00:16:28.426 "base_bdevs_list": [ 00:16:28.426 { 00:16:28.426 "name": "spare", 00:16:28.426 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 }, 00:16:28.426 { 00:16:28.426 "name": "BaseBdev2", 00:16:28.426 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 }, 00:16:28.426 { 00:16:28.426 "name": "BaseBdev3", 00:16:28.426 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 }, 00:16:28.426 { 00:16:28.426 "name": "BaseBdev4", 00:16:28.426 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 } 00:16:28.426 ] 00:16:28.426 }' 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.426 09:30:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.426 09:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.804 "name": "raid_bdev1", 00:16:29.804 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:29.804 "strip_size_kb": 64, 00:16:29.804 "state": "online", 00:16:29.804 "raid_level": "raid5f", 00:16:29.804 "superblock": false, 00:16:29.804 "num_base_bdevs": 4, 00:16:29.804 "num_base_bdevs_discovered": 4, 00:16:29.804 "num_base_bdevs_operational": 4, 00:16:29.804 "process": { 00:16:29.804 "type": "rebuild", 00:16:29.804 "target": "spare", 00:16:29.804 "progress": { 00:16:29.804 "blocks": 174720, 
00:16:29.804 "percent": 88 00:16:29.804 } 00:16:29.804 }, 00:16:29.804 "base_bdevs_list": [ 00:16:29.804 { 00:16:29.804 "name": "spare", 00:16:29.804 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:29.804 "is_configured": true, 00:16:29.804 "data_offset": 0, 00:16:29.804 "data_size": 65536 00:16:29.804 }, 00:16:29.804 { 00:16:29.804 "name": "BaseBdev2", 00:16:29.804 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:29.804 "is_configured": true, 00:16:29.804 "data_offset": 0, 00:16:29.804 "data_size": 65536 00:16:29.804 }, 00:16:29.804 { 00:16:29.804 "name": "BaseBdev3", 00:16:29.804 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:29.804 "is_configured": true, 00:16:29.804 "data_offset": 0, 00:16:29.804 "data_size": 65536 00:16:29.804 }, 00:16:29.804 { 00:16:29.804 "name": "BaseBdev4", 00:16:29.804 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:29.804 "is_configured": true, 00:16:29.804 "data_offset": 0, 00:16:29.804 "data_size": 65536 00:16:29.804 } 00:16:29.804 ] 00:16:29.804 }' 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.804 09:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.743 "name": "raid_bdev1", 00:16:30.743 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:30.743 "strip_size_kb": 64, 00:16:30.743 "state": "online", 00:16:30.743 "raid_level": "raid5f", 00:16:30.743 "superblock": false, 00:16:30.743 "num_base_bdevs": 4, 00:16:30.743 "num_base_bdevs_discovered": 4, 00:16:30.743 "num_base_bdevs_operational": 4, 00:16:30.743 "process": { 00:16:30.743 "type": "rebuild", 00:16:30.743 "target": "spare", 00:16:30.743 "progress": { 00:16:30.743 "blocks": 195840, 00:16:30.743 "percent": 99 00:16:30.743 } 00:16:30.743 }, 00:16:30.743 "base_bdevs_list": [ 00:16:30.743 { 00:16:30.743 "name": "spare", 00:16:30.743 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:30.743 "is_configured": true, 00:16:30.743 "data_offset": 0, 00:16:30.743 "data_size": 65536 00:16:30.743 }, 00:16:30.743 { 00:16:30.743 "name": "BaseBdev2", 00:16:30.743 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:30.743 "is_configured": true, 00:16:30.743 "data_offset": 0, 00:16:30.743 "data_size": 65536 00:16:30.743 }, 00:16:30.743 { 00:16:30.743 "name": "BaseBdev3", 00:16:30.743 "uuid": 
"c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:30.743 "is_configured": true, 00:16:30.743 "data_offset": 0, 00:16:30.743 "data_size": 65536 00:16:30.743 }, 00:16:30.743 { 00:16:30.743 "name": "BaseBdev4", 00:16:30.743 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:30.743 "is_configured": true, 00:16:30.743 "data_offset": 0, 00:16:30.743 "data_size": 65536 00:16:30.743 } 00:16:30.743 ] 00:16:30.743 }' 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.743 [2024-12-12 09:30:04.642137] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.743 [2024-12-12 09:30:04.642223] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.743 [2024-12-12 09:30:04.642278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.743 09:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.123 "name": "raid_bdev1", 00:16:32.123 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:32.123 "strip_size_kb": 64, 00:16:32.123 "state": "online", 00:16:32.123 "raid_level": "raid5f", 00:16:32.123 "superblock": false, 00:16:32.123 "num_base_bdevs": 4, 00:16:32.123 "num_base_bdevs_discovered": 4, 00:16:32.123 "num_base_bdevs_operational": 4, 00:16:32.123 "base_bdevs_list": [ 00:16:32.123 { 00:16:32.123 "name": "spare", 00:16:32.123 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 }, 00:16:32.123 { 00:16:32.123 "name": "BaseBdev2", 00:16:32.123 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 }, 00:16:32.123 { 00:16:32.123 "name": "BaseBdev3", 00:16:32.123 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 }, 00:16:32.123 { 00:16:32.123 "name": "BaseBdev4", 00:16:32.123 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 } 00:16:32.123 ] 00:16:32.123 }' 00:16:32.123 09:30:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.123 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.123 "name": "raid_bdev1", 00:16:32.123 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:32.123 "strip_size_kb": 64, 00:16:32.123 "state": "online", 00:16:32.123 "raid_level": "raid5f", 00:16:32.123 "superblock": false, 00:16:32.123 "num_base_bdevs": 4, 00:16:32.123 
"num_base_bdevs_discovered": 4, 00:16:32.123 "num_base_bdevs_operational": 4, 00:16:32.123 "base_bdevs_list": [ 00:16:32.123 { 00:16:32.123 "name": "spare", 00:16:32.123 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 }, 00:16:32.123 { 00:16:32.123 "name": "BaseBdev2", 00:16:32.123 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 }, 00:16:32.123 { 00:16:32.123 "name": "BaseBdev3", 00:16:32.123 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:32.123 "is_configured": true, 00:16:32.123 "data_offset": 0, 00:16:32.123 "data_size": 65536 00:16:32.123 }, 00:16:32.123 { 00:16:32.124 "name": "BaseBdev4", 00:16:32.124 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:32.124 "is_configured": true, 00:16:32.124 "data_offset": 0, 00:16:32.124 "data_size": 65536 00:16:32.124 } 00:16:32.124 ] 00:16:32.124 }' 00:16:32.124 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.124 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.124 09:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.124 "name": "raid_bdev1", 00:16:32.124 "uuid": "5a3dcf53-fbd1-47ba-849d-f42e7ef734c2", 00:16:32.124 "strip_size_kb": 64, 00:16:32.124 "state": "online", 00:16:32.124 "raid_level": "raid5f", 00:16:32.124 "superblock": false, 00:16:32.124 "num_base_bdevs": 4, 00:16:32.124 "num_base_bdevs_discovered": 4, 00:16:32.124 "num_base_bdevs_operational": 4, 00:16:32.124 "base_bdevs_list": [ 00:16:32.124 { 00:16:32.124 "name": "spare", 00:16:32.124 "uuid": "26b0be41-5301-5b30-aa17-22ba67f56b9f", 00:16:32.124 "is_configured": true, 00:16:32.124 "data_offset": 0, 00:16:32.124 "data_size": 65536 00:16:32.124 }, 00:16:32.124 { 00:16:32.124 "name": "BaseBdev2", 00:16:32.124 "uuid": "68d362d5-c7c7-5b45-9446-fe4427726098", 00:16:32.124 "is_configured": true, 00:16:32.124 
"data_offset": 0, 00:16:32.124 "data_size": 65536 00:16:32.124 }, 00:16:32.124 { 00:16:32.124 "name": "BaseBdev3", 00:16:32.124 "uuid": "c5645b72-af5c-5d03-b878-381ffe78c44c", 00:16:32.124 "is_configured": true, 00:16:32.124 "data_offset": 0, 00:16:32.124 "data_size": 65536 00:16:32.124 }, 00:16:32.124 { 00:16:32.124 "name": "BaseBdev4", 00:16:32.124 "uuid": "04d034e1-35d2-5d51-8f93-2fbd884adea2", 00:16:32.124 "is_configured": true, 00:16:32.124 "data_offset": 0, 00:16:32.124 "data_size": 65536 00:16:32.124 } 00:16:32.124 ] 00:16:32.124 }' 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.124 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.692 [2024-12-12 09:30:06.472897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.692 [2024-12-12 09:30:06.473120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.692 [2024-12-12 09:30:06.473275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.692 [2024-12-12 09:30:06.473419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.692 [2024-12-12 09:30:06.473434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.692 09:30:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.692 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:32.952 /dev/nbd0 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:32.952 09:30:06 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.952 1+0 records in 00:16:32.952 1+0 records out 00:16:32.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570823 s, 7.2 MB/s 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:16:32.952 09:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:33.210 /dev/nbd1 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.210 1+0 records in 00:16:33.210 1+0 records out 00:16:33.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381852 s, 10.7 MB/s 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.210 09:30:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.210 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.469 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.728 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85739 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85739 ']' 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85739 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85739 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.990 killing process with pid 85739 00:16:33.990 Received shutdown signal, test time was about 60.000000 seconds 00:16:33.990 00:16:33.990 Latency(us) 00:16:33.990 [2024-12-12T09:30:08.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.990 [2024-12-12T09:30:08.013Z] =================================================================================================================== 00:16:33.990 [2024-12-12T09:30:08.013Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85739' 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85739 00:16:33.990 [2024-12-12 09:30:07.815169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.990 09:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85739 00:16:34.558 [2024-12-12 09:30:08.359733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:35.946 00:16:35.946 real 0m20.585s 00:16:35.946 user 0m24.317s 00:16:35.946 sys 0m2.637s 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.946 ************************************ 00:16:35.946 END TEST raid5f_rebuild_test 00:16:35.946 ************************************ 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.946 09:30:09 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:35.946 09:30:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:16:35.946 09:30:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.946 09:30:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.946 ************************************ 00:16:35.946 START TEST raid5f_rebuild_test_sb 00:16:35.946 ************************************ 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86266 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86266 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86266 ']' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.946 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.946 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:35.946 Zero copy mechanism will not be used. 00:16:35.946 [2024-12-12 09:30:09.763847] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
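The bdevperf invocation above passes `-o 3M`, and the log then notes that an I/O size of 3145728 exceeds the 65536-byte zero-copy threshold. A quick sketch of that size comparison (both values are read from the log output, not from SPDK source):

```shell
# bdevperf was started with -o 3M; the log reports the resulting byte count
# and that it is above the zero-copy threshold, so zero copy is skipped.
io_size=$((3 * 1024 * 1024))   # 3M -> 3145728 bytes, as logged
zcopy_threshold=65536          # 64 KiB threshold quoted in the notice
if ((io_size > zcopy_threshold)); then
    echo "zero copy mechanism will not be used"
fi
```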
00:16:35.946 [2024-12-12 09:30:09.763966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86266 ] 00:16:35.946 [2024-12-12 09:30:09.940072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.223 [2024-12-12 09:30:10.079524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.483 [2024-12-12 09:30:10.324725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.483 [2024-12-12 09:30:10.324800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.742 BaseBdev1_malloc 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.742 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.742 [2024-12-12 09:30:10.710366] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:36.742 [2024-12-12 09:30:10.710538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.742 [2024-12-12 09:30:10.710591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:36.742 [2024-12-12 09:30:10.710607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.743 [2024-12-12 09:30:10.713524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.743 [2024-12-12 09:30:10.713633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.743 BaseBdev1 00:16:36.743 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.743 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.743 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:36.743 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.743 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 BaseBdev2_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 [2024-12-12 09:30:10.775140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:37.003 [2024-12-12 09:30:10.775216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:37.003 [2024-12-12 09:30:10.775239] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:37.003 [2024-12-12 09:30:10.775252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.003 [2024-12-12 09:30:10.777761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.003 [2024-12-12 09:30:10.777802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:37.003 BaseBdev2 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 BaseBdev3_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 [2024-12-12 09:30:10.846482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:37.003 [2024-12-12 09:30:10.846555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.003 [2024-12-12 09:30:10.846593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:37.003 [2024-12-12 
09:30:10.846606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.003 [2024-12-12 09:30:10.849256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.003 [2024-12-12 09:30:10.849295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:37.003 BaseBdev3 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 BaseBdev4_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 [2024-12-12 09:30:10.900951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:37.003 [2024-12-12 09:30:10.901029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.003 [2024-12-12 09:30:10.901051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:37.003 [2024-12-12 09:30:10.901063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.003 [2024-12-12 09:30:10.903420] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:37.003 [2024-12-12 09:30:10.903548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:37.003 BaseBdev4 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 spare_malloc 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 spare_delay 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 [2024-12-12 09:30:10.970576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.003 [2024-12-12 09:30:10.970657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.003 [2024-12-12 09:30:10.970676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:37.003 [2024-12-12 09:30:10.970687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.003 [2024-12-12 09:30:10.973188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.003 [2024-12-12 09:30:10.973332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.003 spare 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.003 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.003 [2024-12-12 09:30:10.982619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.003 [2024-12-12 09:30:10.984786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.003 [2024-12-12 09:30:10.984920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.003 [2024-12-12 09:30:10.984999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.003 [2024-12-12 09:30:10.985215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:37.004 [2024-12-12 09:30:10.985230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:37.004 [2024-12-12 09:30:10.985545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:37.004 [2024-12-12 09:30:10.993117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:37.004 [2024-12-12 09:30:10.993179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:37.004 [2024-12-12 09:30:10.993422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.004 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.004 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.004 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.004 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.004 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.004 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:30:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.263 "name": "raid_bdev1", 00:16:37.263 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:37.263 "strip_size_kb": 64, 00:16:37.263 "state": "online", 00:16:37.263 "raid_level": "raid5f", 00:16:37.263 "superblock": true, 00:16:37.263 "num_base_bdevs": 4, 00:16:37.263 "num_base_bdevs_discovered": 4, 00:16:37.263 "num_base_bdevs_operational": 4, 00:16:37.263 "base_bdevs_list": [ 00:16:37.263 { 00:16:37.263 "name": "BaseBdev1", 00:16:37.263 "uuid": "847b6650-418b-5e9c-80c6-40229aa6ab71", 00:16:37.263 "is_configured": true, 00:16:37.263 "data_offset": 2048, 00:16:37.263 "data_size": 63488 00:16:37.263 }, 00:16:37.263 { 00:16:37.263 "name": "BaseBdev2", 00:16:37.263 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:37.263 "is_configured": true, 00:16:37.263 "data_offset": 2048, 00:16:37.263 "data_size": 63488 00:16:37.263 }, 00:16:37.263 { 00:16:37.263 "name": "BaseBdev3", 00:16:37.263 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:37.263 "is_configured": true, 00:16:37.263 "data_offset": 2048, 00:16:37.263 "data_size": 63488 00:16:37.263 }, 00:16:37.263 { 00:16:37.263 "name": "BaseBdev4", 00:16:37.263 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:37.263 "is_configured": true, 00:16:37.263 "data_offset": 2048, 00:16:37.263 "data_size": 63488 00:16:37.263 } 00:16:37.263 ] 00:16:37.263 }' 00:16:37.264 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.264 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:37.523 [2024-12-12 09:30:11.406597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.523 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:37.782 [2024-12-12 09:30:11.685927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:37.782 /dev/nbd0 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.782 1+0 records in 00:16:37.782 1+0 records out 00:16:37.782 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000532845 s, 7.7 MB/s 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:37.782 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:38.352 496+0 records in 00:16:38.352 496+0 records out 00:16:38.352 97517568 bytes (98 MB, 93 MiB) copied, 0.569855 s, 171 MB/s 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.352 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:38.612 [2024-12-12 09:30:12.571356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.612 [2024-12-12 09:30:12.589883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.612 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.871 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.871 "name": "raid_bdev1", 00:16:38.871 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:38.871 "strip_size_kb": 64, 00:16:38.871 "state": "online", 00:16:38.871 "raid_level": "raid5f", 00:16:38.871 "superblock": true, 00:16:38.871 "num_base_bdevs": 4, 00:16:38.871 "num_base_bdevs_discovered": 3, 00:16:38.871 "num_base_bdevs_operational": 3, 00:16:38.871 "base_bdevs_list": [ 00:16:38.871 { 00:16:38.871 "name": null, 
00:16:38.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.871 "is_configured": false, 00:16:38.871 "data_offset": 0, 00:16:38.871 "data_size": 63488 00:16:38.871 }, 00:16:38.871 { 00:16:38.871 "name": "BaseBdev2", 00:16:38.871 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:38.871 "is_configured": true, 00:16:38.871 "data_offset": 2048, 00:16:38.871 "data_size": 63488 00:16:38.871 }, 00:16:38.871 { 00:16:38.871 "name": "BaseBdev3", 00:16:38.871 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:38.871 "is_configured": true, 00:16:38.871 "data_offset": 2048, 00:16:38.871 "data_size": 63488 00:16:38.871 }, 00:16:38.871 { 00:16:38.871 "name": "BaseBdev4", 00:16:38.871 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:38.871 "is_configured": true, 00:16:38.871 "data_offset": 2048, 00:16:38.871 "data_size": 63488 00:16:38.871 } 00:16:38.871 ] 00:16:38.871 }' 00:16:38.871 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.871 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.131 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.131 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.131 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.131 [2024-12-12 09:30:13.033219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.131 [2024-12-12 09:30:13.051580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:39.131 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.131 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:39.131 [2024-12-12 09:30:13.062083] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.070 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.329 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.329 "name": "raid_bdev1", 00:16:40.329 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:40.329 "strip_size_kb": 64, 00:16:40.329 "state": "online", 00:16:40.329 "raid_level": "raid5f", 00:16:40.329 "superblock": true, 00:16:40.329 "num_base_bdevs": 4, 00:16:40.329 "num_base_bdevs_discovered": 4, 00:16:40.329 "num_base_bdevs_operational": 4, 00:16:40.330 "process": { 00:16:40.330 "type": "rebuild", 00:16:40.330 "target": "spare", 00:16:40.330 "progress": { 00:16:40.330 "blocks": 17280, 00:16:40.330 "percent": 9 00:16:40.330 } 00:16:40.330 }, 00:16:40.330 "base_bdevs_list": [ 00:16:40.330 { 00:16:40.330 "name": "spare", 00:16:40.330 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:40.330 "is_configured": true, 
00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 }, 00:16:40.330 { 00:16:40.330 "name": "BaseBdev2", 00:16:40.330 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:40.330 "is_configured": true, 00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 }, 00:16:40.330 { 00:16:40.330 "name": "BaseBdev3", 00:16:40.330 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:40.330 "is_configured": true, 00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 }, 00:16:40.330 { 00:16:40.330 "name": "BaseBdev4", 00:16:40.330 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:40.330 "is_configured": true, 00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 } 00:16:40.330 ] 00:16:40.330 }' 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.330 [2024-12-12 09:30:14.209466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.330 [2024-12-12 09:30:14.273292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.330 [2024-12-12 09:30:14.273403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.330 [2024-12-12 
09:30:14.273426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.330 [2024-12-12 09:30:14.273440] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.330 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.589 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.589 "name": "raid_bdev1", 00:16:40.589 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:40.589 "strip_size_kb": 64, 00:16:40.589 "state": "online", 00:16:40.589 "raid_level": "raid5f", 00:16:40.589 "superblock": true, 00:16:40.589 "num_base_bdevs": 4, 00:16:40.589 "num_base_bdevs_discovered": 3, 00:16:40.589 "num_base_bdevs_operational": 3, 00:16:40.589 "base_bdevs_list": [ 00:16:40.589 { 00:16:40.589 "name": null, 00:16:40.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.589 "is_configured": false, 00:16:40.589 "data_offset": 0, 00:16:40.589 "data_size": 63488 00:16:40.589 }, 00:16:40.589 { 00:16:40.589 "name": "BaseBdev2", 00:16:40.589 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:40.589 "is_configured": true, 00:16:40.589 "data_offset": 2048, 00:16:40.589 "data_size": 63488 00:16:40.589 }, 00:16:40.589 { 00:16:40.589 "name": "BaseBdev3", 00:16:40.589 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:40.589 "is_configured": true, 00:16:40.589 "data_offset": 2048, 00:16:40.589 "data_size": 63488 00:16:40.589 }, 00:16:40.589 { 00:16:40.589 "name": "BaseBdev4", 00:16:40.589 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:40.589 "is_configured": true, 00:16:40.589 "data_offset": 2048, 00:16:40.589 "data_size": 63488 00:16:40.589 } 00:16:40.589 ] 00:16:40.589 }' 00:16:40.589 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.589 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.849 "name": "raid_bdev1", 00:16:40.849 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:40.849 "strip_size_kb": 64, 00:16:40.849 "state": "online", 00:16:40.849 "raid_level": "raid5f", 00:16:40.849 "superblock": true, 00:16:40.849 "num_base_bdevs": 4, 00:16:40.849 "num_base_bdevs_discovered": 3, 00:16:40.849 "num_base_bdevs_operational": 3, 00:16:40.849 "base_bdevs_list": [ 00:16:40.849 { 00:16:40.849 "name": null, 00:16:40.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.849 "is_configured": false, 00:16:40.849 "data_offset": 0, 00:16:40.849 "data_size": 63488 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "name": "BaseBdev2", 00:16:40.849 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 2048, 00:16:40.849 "data_size": 63488 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "name": "BaseBdev3", 00:16:40.849 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 2048, 00:16:40.849 "data_size": 63488 00:16:40.849 }, 
00:16:40.849 { 00:16:40.849 "name": "BaseBdev4", 00:16:40.849 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 2048, 00:16:40.849 "data_size": 63488 00:16:40.849 } 00:16:40.849 ] 00:16:40.849 }' 00:16:40.849 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.109 [2024-12-12 09:30:14.935797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.109 [2024-12-12 09:30:14.951951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.109 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:41.109 [2024-12-12 09:30:14.962465] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.047 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.048 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.048 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.048 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.048 "name": "raid_bdev1", 00:16:42.048 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:42.048 "strip_size_kb": 64, 00:16:42.048 "state": "online", 00:16:42.048 "raid_level": "raid5f", 00:16:42.048 "superblock": true, 00:16:42.048 "num_base_bdevs": 4, 00:16:42.048 "num_base_bdevs_discovered": 4, 00:16:42.048 "num_base_bdevs_operational": 4, 00:16:42.048 "process": { 00:16:42.048 "type": "rebuild", 00:16:42.048 "target": "spare", 00:16:42.048 "progress": { 00:16:42.048 "blocks": 19200, 00:16:42.048 "percent": 10 00:16:42.048 } 00:16:42.048 }, 00:16:42.048 "base_bdevs_list": [ 00:16:42.048 { 00:16:42.048 "name": "spare", 00:16:42.048 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:42.048 "is_configured": true, 00:16:42.048 "data_offset": 2048, 00:16:42.048 "data_size": 63488 00:16:42.048 }, 00:16:42.048 { 00:16:42.048 "name": "BaseBdev2", 00:16:42.048 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:42.048 "is_configured": true, 00:16:42.048 "data_offset": 2048, 00:16:42.048 "data_size": 63488 00:16:42.048 }, 00:16:42.048 { 00:16:42.048 "name": "BaseBdev3", 00:16:42.048 "uuid": 
"b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:42.048 "is_configured": true, 00:16:42.048 "data_offset": 2048, 00:16:42.048 "data_size": 63488 00:16:42.048 }, 00:16:42.048 { 00:16:42.048 "name": "BaseBdev4", 00:16:42.048 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:42.048 "is_configured": true, 00:16:42.048 "data_offset": 2048, 00:16:42.048 "data_size": 63488 00:16:42.048 } 00:16:42.048 ] 00:16:42.048 }' 00:16:42.048 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.048 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.048 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:42.308 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=642 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.308 "name": "raid_bdev1", 00:16:42.308 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:42.308 "strip_size_kb": 64, 00:16:42.308 "state": "online", 00:16:42.308 "raid_level": "raid5f", 00:16:42.308 "superblock": true, 00:16:42.308 "num_base_bdevs": 4, 00:16:42.308 "num_base_bdevs_discovered": 4, 00:16:42.308 "num_base_bdevs_operational": 4, 00:16:42.308 "process": { 00:16:42.308 "type": "rebuild", 00:16:42.308 "target": "spare", 00:16:42.308 "progress": { 00:16:42.308 "blocks": 21120, 00:16:42.308 "percent": 11 00:16:42.308 } 00:16:42.308 }, 00:16:42.308 "base_bdevs_list": [ 00:16:42.308 { 00:16:42.308 "name": "spare", 00:16:42.308 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:42.308 "is_configured": true, 00:16:42.308 "data_offset": 2048, 00:16:42.308 "data_size": 63488 00:16:42.308 }, 00:16:42.308 { 00:16:42.308 "name": "BaseBdev2", 00:16:42.308 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:42.308 "is_configured": true, 00:16:42.308 "data_offset": 2048, 00:16:42.308 "data_size": 63488 00:16:42.308 }, 00:16:42.308 { 00:16:42.308 "name": "BaseBdev3", 00:16:42.308 "uuid": 
"b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:42.308 "is_configured": true, 00:16:42.308 "data_offset": 2048, 00:16:42.308 "data_size": 63488 00:16:42.308 }, 00:16:42.308 { 00:16:42.308 "name": "BaseBdev4", 00:16:42.308 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:42.308 "is_configured": true, 00:16:42.308 "data_offset": 2048, 00:16:42.308 "data_size": 63488 00:16:42.308 } 00:16:42.308 ] 00:16:42.308 }' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.308 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.272 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.532 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.532 "name": "raid_bdev1", 00:16:43.532 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:43.532 "strip_size_kb": 64, 00:16:43.532 "state": "online", 00:16:43.532 "raid_level": "raid5f", 00:16:43.532 "superblock": true, 00:16:43.532 "num_base_bdevs": 4, 00:16:43.532 "num_base_bdevs_discovered": 4, 00:16:43.532 "num_base_bdevs_operational": 4, 00:16:43.532 "process": { 00:16:43.532 "type": "rebuild", 00:16:43.532 "target": "spare", 00:16:43.532 "progress": { 00:16:43.532 "blocks": 42240, 00:16:43.532 "percent": 22 00:16:43.532 } 00:16:43.532 }, 00:16:43.532 "base_bdevs_list": [ 00:16:43.532 { 00:16:43.532 "name": "spare", 00:16:43.532 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:43.532 "is_configured": true, 00:16:43.532 "data_offset": 2048, 00:16:43.532 "data_size": 63488 00:16:43.532 }, 00:16:43.532 { 00:16:43.532 "name": "BaseBdev2", 00:16:43.532 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:43.532 "is_configured": true, 00:16:43.532 "data_offset": 2048, 00:16:43.532 "data_size": 63488 00:16:43.532 }, 00:16:43.532 { 00:16:43.532 "name": "BaseBdev3", 00:16:43.532 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:43.532 "is_configured": true, 00:16:43.532 "data_offset": 2048, 00:16:43.532 "data_size": 63488 00:16:43.532 }, 00:16:43.532 { 00:16:43.532 "name": "BaseBdev4", 00:16:43.532 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:43.532 "is_configured": true, 00:16:43.532 "data_offset": 2048, 00:16:43.532 "data_size": 63488 00:16:43.532 } 00:16:43.532 ] 00:16:43.532 }' 00:16:43.532 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.532 09:30:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.532 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.532 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.532 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.471 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.471 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.472 "name": "raid_bdev1", 00:16:44.472 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:44.472 "strip_size_kb": 64, 00:16:44.472 "state": "online", 00:16:44.472 "raid_level": "raid5f", 00:16:44.472 "superblock": true, 
00:16:44.472 "num_base_bdevs": 4, 00:16:44.472 "num_base_bdevs_discovered": 4, 00:16:44.472 "num_base_bdevs_operational": 4, 00:16:44.472 "process": { 00:16:44.472 "type": "rebuild", 00:16:44.472 "target": "spare", 00:16:44.472 "progress": { 00:16:44.472 "blocks": 65280, 00:16:44.472 "percent": 34 00:16:44.472 } 00:16:44.472 }, 00:16:44.472 "base_bdevs_list": [ 00:16:44.472 { 00:16:44.472 "name": "spare", 00:16:44.472 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:44.472 "is_configured": true, 00:16:44.472 "data_offset": 2048, 00:16:44.472 "data_size": 63488 00:16:44.472 }, 00:16:44.472 { 00:16:44.472 "name": "BaseBdev2", 00:16:44.472 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:44.472 "is_configured": true, 00:16:44.472 "data_offset": 2048, 00:16:44.472 "data_size": 63488 00:16:44.472 }, 00:16:44.472 { 00:16:44.472 "name": "BaseBdev3", 00:16:44.472 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:44.472 "is_configured": true, 00:16:44.472 "data_offset": 2048, 00:16:44.472 "data_size": 63488 00:16:44.472 }, 00:16:44.472 { 00:16:44.472 "name": "BaseBdev4", 00:16:44.472 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:44.472 "is_configured": true, 00:16:44.472 "data_offset": 2048, 00:16:44.472 "data_size": 63488 00:16:44.472 } 00:16:44.472 ] 00:16:44.472 }' 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.472 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.732 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.732 09:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.672 09:30:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.672 "name": "raid_bdev1", 00:16:45.672 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:45.672 "strip_size_kb": 64, 00:16:45.672 "state": "online", 00:16:45.672 "raid_level": "raid5f", 00:16:45.672 "superblock": true, 00:16:45.672 "num_base_bdevs": 4, 00:16:45.672 "num_base_bdevs_discovered": 4, 00:16:45.672 "num_base_bdevs_operational": 4, 00:16:45.672 "process": { 00:16:45.672 "type": "rebuild", 00:16:45.672 "target": "spare", 00:16:45.672 "progress": { 00:16:45.672 "blocks": 86400, 00:16:45.672 "percent": 45 00:16:45.672 } 00:16:45.672 }, 00:16:45.672 "base_bdevs_list": [ 00:16:45.672 { 00:16:45.672 "name": "spare", 00:16:45.672 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:45.672 "is_configured": true, 00:16:45.672 "data_offset": 2048, 00:16:45.672 
"data_size": 63488 00:16:45.672 }, 00:16:45.672 { 00:16:45.672 "name": "BaseBdev2", 00:16:45.672 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:45.672 "is_configured": true, 00:16:45.672 "data_offset": 2048, 00:16:45.672 "data_size": 63488 00:16:45.672 }, 00:16:45.672 { 00:16:45.672 "name": "BaseBdev3", 00:16:45.672 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:45.672 "is_configured": true, 00:16:45.672 "data_offset": 2048, 00:16:45.672 "data_size": 63488 00:16:45.672 }, 00:16:45.672 { 00:16:45.672 "name": "BaseBdev4", 00:16:45.672 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:45.672 "is_configured": true, 00:16:45.672 "data_offset": 2048, 00:16:45.672 "data_size": 63488 00:16:45.672 } 00:16:45.672 ] 00:16:45.672 }' 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.672 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.054 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.054 "name": "raid_bdev1", 00:16:47.054 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:47.054 "strip_size_kb": 64, 00:16:47.054 "state": "online", 00:16:47.055 "raid_level": "raid5f", 00:16:47.055 "superblock": true, 00:16:47.055 "num_base_bdevs": 4, 00:16:47.055 "num_base_bdevs_discovered": 4, 00:16:47.055 "num_base_bdevs_operational": 4, 00:16:47.055 "process": { 00:16:47.055 "type": "rebuild", 00:16:47.055 "target": "spare", 00:16:47.055 "progress": { 00:16:47.055 "blocks": 107520, 00:16:47.055 "percent": 56 00:16:47.055 } 00:16:47.055 }, 00:16:47.055 "base_bdevs_list": [ 00:16:47.055 { 00:16:47.055 "name": "spare", 00:16:47.055 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:47.055 "is_configured": true, 00:16:47.055 "data_offset": 2048, 00:16:47.055 "data_size": 63488 00:16:47.055 }, 00:16:47.055 { 00:16:47.055 "name": "BaseBdev2", 00:16:47.055 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:47.055 "is_configured": true, 00:16:47.055 "data_offset": 2048, 00:16:47.055 "data_size": 63488 00:16:47.055 }, 00:16:47.055 { 00:16:47.055 "name": "BaseBdev3", 00:16:47.055 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:47.055 "is_configured": true, 00:16:47.055 "data_offset": 2048, 00:16:47.055 "data_size": 63488 00:16:47.055 }, 00:16:47.055 { 00:16:47.055 "name": "BaseBdev4", 
00:16:47.055 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:47.055 "is_configured": true, 00:16:47.055 "data_offset": 2048, 00:16:47.055 "data_size": 63488 00:16:47.055 } 00:16:47.055 ] 00:16:47.055 }' 00:16:47.055 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.055 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.055 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.055 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.055 09:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:47.994 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.994 "name": "raid_bdev1", 00:16:47.994 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:47.994 "strip_size_kb": 64, 00:16:47.994 "state": "online", 00:16:47.994 "raid_level": "raid5f", 00:16:47.994 "superblock": true, 00:16:47.994 "num_base_bdevs": 4, 00:16:47.994 "num_base_bdevs_discovered": 4, 00:16:47.994 "num_base_bdevs_operational": 4, 00:16:47.995 "process": { 00:16:47.995 "type": "rebuild", 00:16:47.995 "target": "spare", 00:16:47.995 "progress": { 00:16:47.995 "blocks": 130560, 00:16:47.995 "percent": 68 00:16:47.995 } 00:16:47.995 }, 00:16:47.995 "base_bdevs_list": [ 00:16:47.995 { 00:16:47.995 "name": "spare", 00:16:47.995 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:47.995 "is_configured": true, 00:16:47.995 "data_offset": 2048, 00:16:47.995 "data_size": 63488 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "name": "BaseBdev2", 00:16:47.995 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:47.995 "is_configured": true, 00:16:47.995 "data_offset": 2048, 00:16:47.995 "data_size": 63488 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "name": "BaseBdev3", 00:16:47.995 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:47.995 "is_configured": true, 00:16:47.995 "data_offset": 2048, 00:16:47.995 "data_size": 63488 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "name": "BaseBdev4", 00:16:47.995 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:47.995 "is_configured": true, 00:16:47.995 "data_offset": 2048, 00:16:47.995 "data_size": 63488 00:16:47.995 } 00:16:47.995 ] 00:16:47.995 }' 00:16:47.995 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.995 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.995 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:47.995 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.995 09:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.935 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.935 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.935 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.935 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.935 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.935 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.196 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.196 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.196 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.196 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 09:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.196 09:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.196 "name": "raid_bdev1", 00:16:49.196 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:49.196 "strip_size_kb": 64, 00:16:49.196 "state": "online", 00:16:49.196 "raid_level": "raid5f", 00:16:49.196 "superblock": true, 00:16:49.196 "num_base_bdevs": 4, 00:16:49.196 "num_base_bdevs_discovered": 4, 00:16:49.196 "num_base_bdevs_operational": 4, 00:16:49.196 "process": { 00:16:49.196 "type": "rebuild", 00:16:49.196 "target": "spare", 
00:16:49.196 "progress": { 00:16:49.196 "blocks": 151680, 00:16:49.196 "percent": 79 00:16:49.196 } 00:16:49.196 }, 00:16:49.196 "base_bdevs_list": [ 00:16:49.196 { 00:16:49.196 "name": "spare", 00:16:49.196 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:49.196 "is_configured": true, 00:16:49.196 "data_offset": 2048, 00:16:49.196 "data_size": 63488 00:16:49.196 }, 00:16:49.196 { 00:16:49.196 "name": "BaseBdev2", 00:16:49.196 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:49.197 "is_configured": true, 00:16:49.197 "data_offset": 2048, 00:16:49.197 "data_size": 63488 00:16:49.197 }, 00:16:49.197 { 00:16:49.197 "name": "BaseBdev3", 00:16:49.197 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:49.197 "is_configured": true, 00:16:49.197 "data_offset": 2048, 00:16:49.197 "data_size": 63488 00:16:49.197 }, 00:16:49.197 { 00:16:49.197 "name": "BaseBdev4", 00:16:49.197 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:49.197 "is_configured": true, 00:16:49.197 "data_offset": 2048, 00:16:49.197 "data_size": 63488 00:16:49.197 } 00:16:49.197 ] 00:16:49.197 }' 00:16:49.197 09:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.197 09:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.197 09:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.197 09:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.197 09:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.139 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.140 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.140 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.140 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.140 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.140 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.140 "name": "raid_bdev1", 00:16:50.140 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:50.140 "strip_size_kb": 64, 00:16:50.140 "state": "online", 00:16:50.140 "raid_level": "raid5f", 00:16:50.140 "superblock": true, 00:16:50.140 "num_base_bdevs": 4, 00:16:50.140 "num_base_bdevs_discovered": 4, 00:16:50.140 "num_base_bdevs_operational": 4, 00:16:50.140 "process": { 00:16:50.140 "type": "rebuild", 00:16:50.140 "target": "spare", 00:16:50.140 "progress": { 00:16:50.140 "blocks": 172800, 00:16:50.140 "percent": 90 00:16:50.140 } 00:16:50.140 }, 00:16:50.140 "base_bdevs_list": [ 00:16:50.140 { 00:16:50.140 "name": "spare", 00:16:50.140 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:50.140 "is_configured": true, 00:16:50.140 "data_offset": 2048, 00:16:50.140 "data_size": 63488 00:16:50.140 }, 00:16:50.140 { 00:16:50.140 "name": "BaseBdev2", 00:16:50.140 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:50.140 "is_configured": true, 00:16:50.140 
"data_offset": 2048, 00:16:50.140 "data_size": 63488 00:16:50.140 }, 00:16:50.140 { 00:16:50.140 "name": "BaseBdev3", 00:16:50.140 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:50.140 "is_configured": true, 00:16:50.140 "data_offset": 2048, 00:16:50.140 "data_size": 63488 00:16:50.140 }, 00:16:50.140 { 00:16:50.140 "name": "BaseBdev4", 00:16:50.140 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:50.140 "is_configured": true, 00:16:50.140 "data_offset": 2048, 00:16:50.140 "data_size": 63488 00:16:50.140 } 00:16:50.140 ] 00:16:50.140 }' 00:16:50.140 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.400 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.400 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.400 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.400 09:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.340 [2024-12-12 09:30:25.036944] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:51.340 [2024-12-12 09:30:25.037157] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:51.340 [2024-12-12 09:30:25.037388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.340 09:30:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.340 "name": "raid_bdev1", 00:16:51.340 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:51.340 "strip_size_kb": 64, 00:16:51.340 "state": "online", 00:16:51.340 "raid_level": "raid5f", 00:16:51.340 "superblock": true, 00:16:51.340 "num_base_bdevs": 4, 00:16:51.340 "num_base_bdevs_discovered": 4, 00:16:51.340 "num_base_bdevs_operational": 4, 00:16:51.340 "base_bdevs_list": [ 00:16:51.340 { 00:16:51.340 "name": "spare", 00:16:51.340 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:51.340 "is_configured": true, 00:16:51.340 "data_offset": 2048, 00:16:51.340 "data_size": 63488 00:16:51.340 }, 00:16:51.340 { 00:16:51.340 "name": "BaseBdev2", 00:16:51.340 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:51.340 "is_configured": true, 00:16:51.340 "data_offset": 2048, 00:16:51.340 "data_size": 63488 00:16:51.340 }, 00:16:51.340 { 00:16:51.340 "name": "BaseBdev3", 00:16:51.340 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:51.340 "is_configured": true, 00:16:51.340 "data_offset": 2048, 00:16:51.340 "data_size": 63488 00:16:51.340 }, 00:16:51.340 { 00:16:51.340 "name": "BaseBdev4", 00:16:51.340 "uuid": 
"1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:51.340 "is_configured": true, 00:16:51.340 "data_offset": 2048, 00:16:51.340 "data_size": 63488 00:16:51.340 } 00:16:51.340 ] 00:16:51.340 }' 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:51.340 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.600 "name": 
"raid_bdev1", 00:16:51.600 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:51.600 "strip_size_kb": 64, 00:16:51.600 "state": "online", 00:16:51.600 "raid_level": "raid5f", 00:16:51.600 "superblock": true, 00:16:51.600 "num_base_bdevs": 4, 00:16:51.600 "num_base_bdevs_discovered": 4, 00:16:51.600 "num_base_bdevs_operational": 4, 00:16:51.600 "base_bdevs_list": [ 00:16:51.600 { 00:16:51.600 "name": "spare", 00:16:51.600 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 }, 00:16:51.600 { 00:16:51.600 "name": "BaseBdev2", 00:16:51.600 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 }, 00:16:51.600 { 00:16:51.600 "name": "BaseBdev3", 00:16:51.600 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 }, 00:16:51.600 { 00:16:51.600 "name": "BaseBdev4", 00:16:51.600 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 } 00:16:51.600 ] 00:16:51.600 }' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.600 "name": "raid_bdev1", 00:16:51.600 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:51.600 "strip_size_kb": 64, 00:16:51.600 "state": "online", 00:16:51.600 "raid_level": "raid5f", 00:16:51.600 "superblock": true, 00:16:51.600 "num_base_bdevs": 4, 00:16:51.600 "num_base_bdevs_discovered": 4, 00:16:51.600 "num_base_bdevs_operational": 4, 00:16:51.600 "base_bdevs_list": [ 00:16:51.600 { 00:16:51.600 "name": "spare", 
00:16:51.600 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 }, 00:16:51.600 { 00:16:51.600 "name": "BaseBdev2", 00:16:51.600 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 }, 00:16:51.600 { 00:16:51.600 "name": "BaseBdev3", 00:16:51.600 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 }, 00:16:51.600 { 00:16:51.600 "name": "BaseBdev4", 00:16:51.600 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:51.600 "is_configured": true, 00:16:51.600 "data_offset": 2048, 00:16:51.600 "data_size": 63488 00:16:51.600 } 00:16:51.600 ] 00:16:51.600 }' 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.600 09:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.181 [2024-12-12 09:30:26.008452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.181 [2024-12-12 09:30:26.008605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.181 [2024-12-12 09:30:26.008768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.181 [2024-12-12 09:30:26.008944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.181 [2024-12-12 09:30:26.009039] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.181 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:52.441 /dev/nbd0 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.441 1+0 records in 00:16:52.441 1+0 records out 00:16:52.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296589 s, 13.8 MB/s 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.441 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:52.701 /dev/nbd1 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.701 1+0 records in 00:16:52.701 1+0 records out 00:16:52.701 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000474564 s, 8.6 MB/s 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.701 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.960 09:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.219 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:53.479 09:30:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.479 [2024-12-12 09:30:27.294255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.479 [2024-12-12 09:30:27.294323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.479 [2024-12-12 09:30:27.294351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:53.479 [2024-12-12 09:30:27.294361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.479 [2024-12-12 09:30:27.297045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.479 [2024-12-12 09:30:27.297125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.479 [2024-12-12 09:30:27.297244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:53.479 [2024-12-12 09:30:27.297303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.479 [2024-12-12 09:30:27.297448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.479 [2024-12-12 09:30:27.297561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.479 [2024-12-12 09:30:27.297649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:16:53.479 spare 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.479 [2024-12-12 09:30:27.397566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:53.479 [2024-12-12 09:30:27.397622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:53.479 [2024-12-12 09:30:27.397990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:53.479 [2024-12-12 09:30:27.405362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:53.479 [2024-12-12 09:30:27.405386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:53.479 [2024-12-12 09:30:27.405611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.479 "name": "raid_bdev1", 00:16:53.479 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:53.479 "strip_size_kb": 64, 00:16:53.479 "state": "online", 00:16:53.479 "raid_level": "raid5f", 00:16:53.479 "superblock": true, 00:16:53.479 "num_base_bdevs": 4, 00:16:53.479 "num_base_bdevs_discovered": 4, 00:16:53.479 "num_base_bdevs_operational": 4, 00:16:53.479 "base_bdevs_list": [ 00:16:53.479 { 00:16:53.479 "name": "spare", 00:16:53.479 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:53.479 "is_configured": true, 00:16:53.479 "data_offset": 2048, 00:16:53.479 "data_size": 63488 00:16:53.479 }, 00:16:53.479 { 00:16:53.479 "name": "BaseBdev2", 00:16:53.479 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:53.479 "is_configured": true, 00:16:53.479 "data_offset": 2048, 00:16:53.479 "data_size": 63488 00:16:53.479 }, 00:16:53.479 { 00:16:53.479 "name": 
"BaseBdev3", 00:16:53.479 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:53.479 "is_configured": true, 00:16:53.479 "data_offset": 2048, 00:16:53.479 "data_size": 63488 00:16:53.479 }, 00:16:53.479 { 00:16:53.479 "name": "BaseBdev4", 00:16:53.479 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:53.479 "is_configured": true, 00:16:53.479 "data_offset": 2048, 00:16:53.479 "data_size": 63488 00:16:53.479 } 00:16:53.479 ] 00:16:53.479 }' 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.479 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.049 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.049 "name": "raid_bdev1", 00:16:54.049 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:54.049 
"strip_size_kb": 64, 00:16:54.050 "state": "online", 00:16:54.050 "raid_level": "raid5f", 00:16:54.050 "superblock": true, 00:16:54.050 "num_base_bdevs": 4, 00:16:54.050 "num_base_bdevs_discovered": 4, 00:16:54.050 "num_base_bdevs_operational": 4, 00:16:54.050 "base_bdevs_list": [ 00:16:54.050 { 00:16:54.050 "name": "spare", 00:16:54.050 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:54.050 "is_configured": true, 00:16:54.050 "data_offset": 2048, 00:16:54.050 "data_size": 63488 00:16:54.050 }, 00:16:54.050 { 00:16:54.050 "name": "BaseBdev2", 00:16:54.050 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:54.050 "is_configured": true, 00:16:54.050 "data_offset": 2048, 00:16:54.050 "data_size": 63488 00:16:54.050 }, 00:16:54.050 { 00:16:54.050 "name": "BaseBdev3", 00:16:54.050 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:54.050 "is_configured": true, 00:16:54.050 "data_offset": 2048, 00:16:54.050 "data_size": 63488 00:16:54.050 }, 00:16:54.050 { 00:16:54.050 "name": "BaseBdev4", 00:16:54.050 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:54.050 "is_configured": true, 00:16:54.050 "data_offset": 2048, 00:16:54.050 "data_size": 63488 00:16:54.050 } 00:16:54.050 ] 00:16:54.050 }' 00:16:54.050 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.050 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.050 09:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.050 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.050 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.050 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:54.050 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:54.050 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.310 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.310 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.310 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.311 [2024-12-12 09:30:28.099901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.311 "name": "raid_bdev1", 00:16:54.311 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:54.311 "strip_size_kb": 64, 00:16:54.311 "state": "online", 00:16:54.311 "raid_level": "raid5f", 00:16:54.311 "superblock": true, 00:16:54.311 "num_base_bdevs": 4, 00:16:54.311 "num_base_bdevs_discovered": 3, 00:16:54.311 "num_base_bdevs_operational": 3, 00:16:54.311 "base_bdevs_list": [ 00:16:54.311 { 00:16:54.311 "name": null, 00:16:54.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.311 "is_configured": false, 00:16:54.311 "data_offset": 0, 00:16:54.311 "data_size": 63488 00:16:54.311 }, 00:16:54.311 { 00:16:54.311 "name": "BaseBdev2", 00:16:54.311 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:54.311 "is_configured": true, 00:16:54.311 "data_offset": 2048, 00:16:54.311 "data_size": 63488 00:16:54.311 }, 00:16:54.311 { 00:16:54.311 "name": "BaseBdev3", 00:16:54.311 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:54.311 "is_configured": true, 00:16:54.311 "data_offset": 2048, 00:16:54.311 "data_size": 63488 00:16:54.311 }, 00:16:54.311 { 00:16:54.311 "name": "BaseBdev4", 00:16:54.311 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:54.311 "is_configured": true, 00:16:54.311 "data_offset": 2048, 00:16:54.311 "data_size": 63488 00:16:54.311 } 00:16:54.311 ] 00:16:54.311 }' 
00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.311 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.571 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.571 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.571 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.571 [2024-12-12 09:30:28.527550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.571 [2024-12-12 09:30:28.527923] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.571 [2024-12-12 09:30:28.527969] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:54.571 [2024-12-12 09:30:28.528018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.571 [2024-12-12 09:30:28.545218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:54.571 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.571 09:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:54.571 [2024-12-12 09:30:28.555763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.959 "name": "raid_bdev1", 00:16:55.959 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:55.959 "strip_size_kb": 64, 00:16:55.959 "state": "online", 00:16:55.959 "raid_level": "raid5f", 00:16:55.959 "superblock": true, 00:16:55.959 "num_base_bdevs": 4, 00:16:55.959 "num_base_bdevs_discovered": 4, 00:16:55.959 "num_base_bdevs_operational": 4, 00:16:55.959 "process": { 00:16:55.959 "type": "rebuild", 00:16:55.959 "target": "spare", 00:16:55.959 "progress": { 00:16:55.959 "blocks": 17280, 00:16:55.959 "percent": 9 00:16:55.959 } 00:16:55.959 }, 00:16:55.959 "base_bdevs_list": [ 00:16:55.959 { 00:16:55.959 "name": "spare", 00:16:55.959 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:55.959 "is_configured": true, 00:16:55.959 "data_offset": 2048, 00:16:55.959 "data_size": 63488 00:16:55.959 }, 00:16:55.959 { 00:16:55.959 "name": "BaseBdev2", 00:16:55.959 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:55.959 "is_configured": true, 00:16:55.959 "data_offset": 2048, 00:16:55.959 "data_size": 63488 00:16:55.959 }, 00:16:55.959 { 00:16:55.959 "name": "BaseBdev3", 00:16:55.959 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:55.959 
"is_configured": true, 00:16:55.959 "data_offset": 2048, 00:16:55.959 "data_size": 63488 00:16:55.959 }, 00:16:55.959 { 00:16:55.959 "name": "BaseBdev4", 00:16:55.959 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:55.959 "is_configured": true, 00:16:55.959 "data_offset": 2048, 00:16:55.959 "data_size": 63488 00:16:55.959 } 00:16:55.959 ] 00:16:55.959 }' 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.959 [2024-12-12 09:30:29.684048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.959 [2024-12-12 09:30:29.766131] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.959 [2024-12-12 09:30:29.766310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.959 [2024-12-12 09:30:29.766357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.959 [2024-12-12 09:30:29.766386] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.959 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.959 "name": "raid_bdev1", 00:16:55.959 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:55.959 "strip_size_kb": 64, 00:16:55.959 "state": "online", 00:16:55.959 "raid_level": "raid5f", 00:16:55.959 "superblock": true, 00:16:55.959 "num_base_bdevs": 4, 00:16:55.959 "num_base_bdevs_discovered": 3, 
00:16:55.959 "num_base_bdevs_operational": 3, 00:16:55.960 "base_bdevs_list": [ 00:16:55.960 { 00:16:55.960 "name": null, 00:16:55.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.960 "is_configured": false, 00:16:55.960 "data_offset": 0, 00:16:55.960 "data_size": 63488 00:16:55.960 }, 00:16:55.960 { 00:16:55.960 "name": "BaseBdev2", 00:16:55.960 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:55.960 "is_configured": true, 00:16:55.960 "data_offset": 2048, 00:16:55.960 "data_size": 63488 00:16:55.960 }, 00:16:55.960 { 00:16:55.960 "name": "BaseBdev3", 00:16:55.960 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:55.960 "is_configured": true, 00:16:55.960 "data_offset": 2048, 00:16:55.960 "data_size": 63488 00:16:55.960 }, 00:16:55.960 { 00:16:55.960 "name": "BaseBdev4", 00:16:55.960 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:55.960 "is_configured": true, 00:16:55.960 "data_offset": 2048, 00:16:55.960 "data_size": 63488 00:16:55.960 } 00:16:55.960 ] 00:16:55.960 }' 00:16:55.960 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.960 09:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.233 09:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:56.233 09:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.233 09:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.233 [2024-12-12 09:30:30.240724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:56.233 [2024-12-12 09:30:30.240813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.233 [2024-12-12 09:30:30.240844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:56.233 [2024-12-12 09:30:30.240856] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.233 [2024-12-12 09:30:30.241449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.233 [2024-12-12 09:30:30.241474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:56.233 [2024-12-12 09:30:30.241586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:56.233 [2024-12-12 09:30:30.241603] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:56.233 [2024-12-12 09:30:30.241614] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:56.233 [2024-12-12 09:30:30.241644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.493 [2024-12-12 09:30:30.258131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:56.493 spare 00:16:56.493 09:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.493 09:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:56.493 [2024-12-12 09:30:30.267803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.433 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.434 "name": "raid_bdev1", 00:16:57.434 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:57.434 "strip_size_kb": 64, 00:16:57.434 "state": "online", 00:16:57.434 "raid_level": "raid5f", 00:16:57.434 "superblock": true, 00:16:57.434 "num_base_bdevs": 4, 00:16:57.434 "num_base_bdevs_discovered": 4, 00:16:57.434 "num_base_bdevs_operational": 4, 00:16:57.434 "process": { 00:16:57.434 "type": "rebuild", 00:16:57.434 "target": "spare", 00:16:57.434 "progress": { 00:16:57.434 "blocks": 17280, 00:16:57.434 "percent": 9 00:16:57.434 } 00:16:57.434 }, 00:16:57.434 "base_bdevs_list": [ 00:16:57.434 { 00:16:57.434 "name": "spare", 00:16:57.434 "uuid": "b3c9c4e1-86c3-51f5-b043-a3425ff53746", 00:16:57.434 "is_configured": true, 00:16:57.434 "data_offset": 2048, 00:16:57.434 "data_size": 63488 00:16:57.434 }, 00:16:57.434 { 00:16:57.434 "name": "BaseBdev2", 00:16:57.434 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:57.434 "is_configured": true, 00:16:57.434 "data_offset": 2048, 00:16:57.434 "data_size": 63488 00:16:57.434 }, 00:16:57.434 { 00:16:57.434 "name": "BaseBdev3", 00:16:57.434 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:57.434 "is_configured": true, 00:16:57.434 "data_offset": 2048, 00:16:57.434 "data_size": 63488 00:16:57.434 }, 00:16:57.434 { 00:16:57.434 "name": "BaseBdev4", 00:16:57.434 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 
00:16:57.434 "is_configured": true, 00:16:57.434 "data_offset": 2048, 00:16:57.434 "data_size": 63488 00:16:57.434 } 00:16:57.434 ] 00:16:57.434 }' 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.434 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.434 [2024-12-12 09:30:31.408350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.694 [2024-12-12 09:30:31.480086] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.694 [2024-12-12 09:30:31.480192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.694 [2024-12-12 09:30:31.480220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.694 [2024-12-12 09:30:31.480231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.694 "name": "raid_bdev1", 00:16:57.694 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:57.694 "strip_size_kb": 64, 00:16:57.694 "state": "online", 00:16:57.694 "raid_level": "raid5f", 00:16:57.694 "superblock": true, 00:16:57.694 "num_base_bdevs": 4, 00:16:57.694 "num_base_bdevs_discovered": 3, 00:16:57.694 "num_base_bdevs_operational": 3, 00:16:57.694 "base_bdevs_list": [ 00:16:57.694 { 00:16:57.694 "name": null, 00:16:57.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.694 "is_configured": 
false, 00:16:57.694 "data_offset": 0, 00:16:57.694 "data_size": 63488 00:16:57.694 }, 00:16:57.694 { 00:16:57.694 "name": "BaseBdev2", 00:16:57.694 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:57.694 "is_configured": true, 00:16:57.694 "data_offset": 2048, 00:16:57.694 "data_size": 63488 00:16:57.694 }, 00:16:57.694 { 00:16:57.694 "name": "BaseBdev3", 00:16:57.694 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:57.694 "is_configured": true, 00:16:57.694 "data_offset": 2048, 00:16:57.694 "data_size": 63488 00:16:57.694 }, 00:16:57.694 { 00:16:57.694 "name": "BaseBdev4", 00:16:57.694 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:57.694 "is_configured": true, 00:16:57.694 "data_offset": 2048, 00:16:57.694 "data_size": 63488 00:16:57.694 } 00:16:57.694 ] 00:16:57.694 }' 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.694 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.954 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.215 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.215 "name": "raid_bdev1", 00:16:58.215 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:58.215 "strip_size_kb": 64, 00:16:58.215 "state": "online", 00:16:58.215 "raid_level": "raid5f", 00:16:58.215 "superblock": true, 00:16:58.215 "num_base_bdevs": 4, 00:16:58.215 "num_base_bdevs_discovered": 3, 00:16:58.215 "num_base_bdevs_operational": 3, 00:16:58.215 "base_bdevs_list": [ 00:16:58.215 { 00:16:58.215 "name": null, 00:16:58.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.215 "is_configured": false, 00:16:58.215 "data_offset": 0, 00:16:58.215 "data_size": 63488 00:16:58.215 }, 00:16:58.215 { 00:16:58.215 "name": "BaseBdev2", 00:16:58.215 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:58.215 "is_configured": true, 00:16:58.215 "data_offset": 2048, 00:16:58.215 "data_size": 63488 00:16:58.215 }, 00:16:58.215 { 00:16:58.215 "name": "BaseBdev3", 00:16:58.215 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:58.215 "is_configured": true, 00:16:58.215 "data_offset": 2048, 00:16:58.215 "data_size": 63488 00:16:58.215 }, 00:16:58.215 { 00:16:58.215 "name": "BaseBdev4", 00:16:58.215 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:58.215 "is_configured": true, 00:16:58.215 "data_offset": 2048, 00:16:58.215 "data_size": 63488 00:16:58.215 } 00:16:58.215 ] 00:16:58.215 }' 00:16:58.215 09:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.215 [2024-12-12 09:30:32.097077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.215 [2024-12-12 09:30:32.097165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.215 [2024-12-12 09:30:32.097195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:58.215 [2024-12-12 09:30:32.097208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.215 [2024-12-12 09:30:32.097837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.215 [2024-12-12 09:30:32.097859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.215 [2024-12-12 09:30:32.097992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:58.215 [2024-12-12 09:30:32.098013] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:58.215 [2024-12-12 09:30:32.098029] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:16:58.215 [2024-12-12 09:30:32.098044] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:58.215 BaseBdev1 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.215 09:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.157 09:30:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.157 "name": "raid_bdev1", 00:16:59.157 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:59.157 "strip_size_kb": 64, 00:16:59.157 "state": "online", 00:16:59.157 "raid_level": "raid5f", 00:16:59.157 "superblock": true, 00:16:59.157 "num_base_bdevs": 4, 00:16:59.157 "num_base_bdevs_discovered": 3, 00:16:59.157 "num_base_bdevs_operational": 3, 00:16:59.157 "base_bdevs_list": [ 00:16:59.157 { 00:16:59.157 "name": null, 00:16:59.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.157 "is_configured": false, 00:16:59.157 "data_offset": 0, 00:16:59.157 "data_size": 63488 00:16:59.157 }, 00:16:59.157 { 00:16:59.157 "name": "BaseBdev2", 00:16:59.157 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:59.157 "is_configured": true, 00:16:59.157 "data_offset": 2048, 00:16:59.157 "data_size": 63488 00:16:59.157 }, 00:16:59.157 { 00:16:59.157 "name": "BaseBdev3", 00:16:59.157 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:59.157 "is_configured": true, 00:16:59.157 "data_offset": 2048, 00:16:59.157 "data_size": 63488 00:16:59.157 }, 00:16:59.157 { 00:16:59.157 "name": "BaseBdev4", 00:16:59.157 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:59.157 "is_configured": true, 00:16:59.157 "data_offset": 2048, 00:16:59.157 "data_size": 63488 00:16:59.157 } 00:16:59.157 ] 00:16:59.157 }' 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.157 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.726 09:30:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.726 "name": "raid_bdev1", 00:16:59.726 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:16:59.726 "strip_size_kb": 64, 00:16:59.726 "state": "online", 00:16:59.726 "raid_level": "raid5f", 00:16:59.726 "superblock": true, 00:16:59.726 "num_base_bdevs": 4, 00:16:59.726 "num_base_bdevs_discovered": 3, 00:16:59.726 "num_base_bdevs_operational": 3, 00:16:59.726 "base_bdevs_list": [ 00:16:59.726 { 00:16:59.726 "name": null, 00:16:59.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.726 "is_configured": false, 00:16:59.726 "data_offset": 0, 00:16:59.726 "data_size": 63488 00:16:59.726 }, 00:16:59.726 { 00:16:59.726 "name": "BaseBdev2", 00:16:59.726 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:16:59.726 "is_configured": true, 00:16:59.726 "data_offset": 2048, 00:16:59.726 "data_size": 63488 00:16:59.726 }, 00:16:59.726 { 00:16:59.726 "name": "BaseBdev3", 00:16:59.726 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:16:59.726 "is_configured": true, 00:16:59.726 "data_offset": 2048, 00:16:59.726 
"data_size": 63488 00:16:59.726 }, 00:16:59.726 { 00:16:59.726 "name": "BaseBdev4", 00:16:59.726 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:16:59.726 "is_configured": true, 00:16:59.726 "data_offset": 2048, 00:16:59.726 "data_size": 63488 00:16:59.726 } 00:16:59.726 ] 00:16:59.726 }' 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.726 [2024-12-12 
09:30:33.698791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.726 [2024-12-12 09:30:33.699090] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:59.726 [2024-12-12 09:30:33.699114] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:59.726 request: 00:16:59.726 { 00:16:59.726 "base_bdev": "BaseBdev1", 00:16:59.726 "raid_bdev": "raid_bdev1", 00:16:59.726 "method": "bdev_raid_add_base_bdev", 00:16:59.726 "req_id": 1 00:16:59.726 } 00:16:59.726 Got JSON-RPC error response 00:16:59.726 response: 00:16:59.726 { 00:16:59.726 "code": -22, 00:16:59.726 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:59.726 } 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.726 09:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.105 "name": "raid_bdev1", 00:17:01.105 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:17:01.105 "strip_size_kb": 64, 00:17:01.105 "state": "online", 00:17:01.105 "raid_level": "raid5f", 00:17:01.105 "superblock": true, 00:17:01.105 "num_base_bdevs": 4, 00:17:01.105 "num_base_bdevs_discovered": 3, 00:17:01.105 "num_base_bdevs_operational": 3, 00:17:01.105 "base_bdevs_list": [ 00:17:01.105 { 00:17:01.105 "name": null, 00:17:01.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.105 "is_configured": false, 00:17:01.105 "data_offset": 0, 00:17:01.105 "data_size": 63488 00:17:01.105 }, 00:17:01.105 { 00:17:01.105 "name": "BaseBdev2", 00:17:01.105 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:17:01.105 
"is_configured": true, 00:17:01.105 "data_offset": 2048, 00:17:01.105 "data_size": 63488 00:17:01.105 }, 00:17:01.105 { 00:17:01.105 "name": "BaseBdev3", 00:17:01.105 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:17:01.105 "is_configured": true, 00:17:01.105 "data_offset": 2048, 00:17:01.105 "data_size": 63488 00:17:01.105 }, 00:17:01.105 { 00:17:01.105 "name": "BaseBdev4", 00:17:01.105 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:17:01.105 "is_configured": true, 00:17:01.105 "data_offset": 2048, 00:17:01.105 "data_size": 63488 00:17:01.105 } 00:17:01.105 ] 00:17:01.105 }' 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.105 09:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:01.363 "name": "raid_bdev1", 00:17:01.363 "uuid": "e0d2ab74-6720-40b9-87d8-af1a88fdb02e", 00:17:01.363 "strip_size_kb": 64, 00:17:01.363 "state": "online", 00:17:01.363 "raid_level": "raid5f", 00:17:01.363 "superblock": true, 00:17:01.363 "num_base_bdevs": 4, 00:17:01.363 "num_base_bdevs_discovered": 3, 00:17:01.363 "num_base_bdevs_operational": 3, 00:17:01.363 "base_bdevs_list": [ 00:17:01.363 { 00:17:01.363 "name": null, 00:17:01.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.363 "is_configured": false, 00:17:01.363 "data_offset": 0, 00:17:01.363 "data_size": 63488 00:17:01.363 }, 00:17:01.363 { 00:17:01.363 "name": "BaseBdev2", 00:17:01.363 "uuid": "7b98878b-05c0-5d27-991a-710f2c2aa9d4", 00:17:01.363 "is_configured": true, 00:17:01.363 "data_offset": 2048, 00:17:01.363 "data_size": 63488 00:17:01.363 }, 00:17:01.363 { 00:17:01.363 "name": "BaseBdev3", 00:17:01.363 "uuid": "b6ee22ad-f87d-5178-ac3d-f904d4f6a729", 00:17:01.363 "is_configured": true, 00:17:01.363 "data_offset": 2048, 00:17:01.363 "data_size": 63488 00:17:01.363 }, 00:17:01.363 { 00:17:01.363 "name": "BaseBdev4", 00:17:01.363 "uuid": "1eb4d640-6bf1-56af-8c95-6462da479f80", 00:17:01.363 "is_configured": true, 00:17:01.363 "data_offset": 2048, 00:17:01.363 "data_size": 63488 00:17:01.363 } 00:17:01.363 ] 00:17:01.363 }' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86266 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
86266 ']' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86266 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86266 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86266' 00:17:01.363 killing process with pid 86266 00:17:01.363 Received shutdown signal, test time was about 60.000000 seconds 00:17:01.363 00:17:01.363 Latency(us) 00:17:01.363 [2024-12-12T09:30:35.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.363 [2024-12-12T09:30:35.386Z] =================================================================================================================== 00:17:01.363 [2024-12-12T09:30:35.386Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86266 00:17:01.363 [2024-12-12 09:30:35.374831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.363 09:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86266 00:17:01.363 [2024-12-12 09:30:35.375007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.363 [2024-12-12 09:30:35.375108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.363 [2024-12-12 09:30:35.375131] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:01.931 [2024-12-12 09:30:35.938273] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.336 09:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:03.336 00:17:03.336 real 0m27.632s 00:17:03.336 user 0m34.431s 00:17:03.336 sys 0m3.304s 00:17:03.336 ************************************ 00:17:03.336 END TEST raid5f_rebuild_test_sb 00:17:03.336 ************************************ 00:17:03.336 09:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.336 09:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.336 09:30:37 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:03.336 09:30:37 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:03.336 09:30:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:03.336 09:30:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.336 09:30:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.336 ************************************ 00:17:03.336 START TEST raid_state_function_test_sb_4k 00:17:03.336 ************************************ 00:17:03.336 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:03.336 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:03.336 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:03.336 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:03.336 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:03.596 09:30:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=87084 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87084' 00:17:03.596 Process raid pid: 87084 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 87084 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87084 ']' 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.596 09:30:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.596 [2024-12-12 09:30:37.466102] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:17:03.596 [2024-12-12 09:30:37.466372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.856 [2024-12-12 09:30:37.650050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.856 [2024-12-12 09:30:37.811406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.115 [2024-12-12 09:30:38.078570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.115 [2024-12-12 09:30:38.078758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 [2024-12-12 09:30:38.324215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.375 [2024-12-12 09:30:38.324385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.375 [2024-12-12 09:30:38.324424] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.375 [2024-12-12 09:30:38.324454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.375 "name": "Existed_Raid", 00:17:04.375 "uuid": 
"d6257d11-e46c-45df-97f5-4daa626ba2ef", 00:17:04.375 "strip_size_kb": 0, 00:17:04.375 "state": "configuring", 00:17:04.375 "raid_level": "raid1", 00:17:04.375 "superblock": true, 00:17:04.375 "num_base_bdevs": 2, 00:17:04.375 "num_base_bdevs_discovered": 0, 00:17:04.375 "num_base_bdevs_operational": 2, 00:17:04.375 "base_bdevs_list": [ 00:17:04.375 { 00:17:04.375 "name": "BaseBdev1", 00:17:04.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.375 "is_configured": false, 00:17:04.375 "data_offset": 0, 00:17:04.375 "data_size": 0 00:17:04.375 }, 00:17:04.375 { 00:17:04.375 "name": "BaseBdev2", 00:17:04.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.375 "is_configured": false, 00:17:04.375 "data_offset": 0, 00:17:04.375 "data_size": 0 00:17:04.375 } 00:17:04.375 ] 00:17:04.375 }' 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.375 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 [2024-12-12 09:30:38.831244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.944 [2024-12-12 09:30:38.831303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:04.944 09:30:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 [2024-12-12 09:30:38.839214] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.944 [2024-12-12 09:30:38.839273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.944 [2024-12-12 09:30:38.839285] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.944 [2024-12-12 09:30:38.839301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 [2024-12-12 09:30:38.894515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.944 BaseBdev1 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 [ 00:17:04.944 { 00:17:04.944 "name": "BaseBdev1", 00:17:04.944 "aliases": [ 00:17:04.944 "3f174682-cfdc-4c62-868e-52f27fe899fd" 00:17:04.944 ], 00:17:04.944 "product_name": "Malloc disk", 00:17:04.944 "block_size": 4096, 00:17:04.944 "num_blocks": 8192, 00:17:04.944 "uuid": "3f174682-cfdc-4c62-868e-52f27fe899fd", 00:17:04.944 "assigned_rate_limits": { 00:17:04.944 "rw_ios_per_sec": 0, 00:17:04.944 "rw_mbytes_per_sec": 0, 00:17:04.944 "r_mbytes_per_sec": 0, 00:17:04.944 "w_mbytes_per_sec": 0 00:17:04.944 }, 00:17:04.944 "claimed": true, 00:17:04.944 "claim_type": "exclusive_write", 00:17:04.944 "zoned": false, 00:17:04.944 "supported_io_types": { 00:17:04.944 "read": true, 00:17:04.944 "write": true, 00:17:04.944 "unmap": true, 00:17:04.944 "flush": true, 00:17:04.944 "reset": true, 00:17:04.944 "nvme_admin": false, 00:17:04.944 "nvme_io": false, 00:17:04.944 "nvme_io_md": false, 00:17:04.944 "write_zeroes": true, 00:17:04.944 "zcopy": true, 00:17:04.944 
"get_zone_info": false, 00:17:04.944 "zone_management": false, 00:17:04.944 "zone_append": false, 00:17:04.944 "compare": false, 00:17:04.944 "compare_and_write": false, 00:17:04.944 "abort": true, 00:17:04.944 "seek_hole": false, 00:17:04.944 "seek_data": false, 00:17:04.944 "copy": true, 00:17:04.944 "nvme_iov_md": false 00:17:04.944 }, 00:17:04.944 "memory_domains": [ 00:17:04.944 { 00:17:04.944 "dma_device_id": "system", 00:17:04.944 "dma_device_type": 1 00:17:04.944 }, 00:17:04.944 { 00:17:04.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.944 "dma_device_type": 2 00:17:04.944 } 00:17:04.944 ], 00:17:04.944 "driver_specific": {} 00:17:04.944 } 00:17:04.944 ] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.944 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.204 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.204 "name": "Existed_Raid", 00:17:05.204 "uuid": "385906e0-a3da-4d42-957f-d0951e6a57c5", 00:17:05.204 "strip_size_kb": 0, 00:17:05.204 "state": "configuring", 00:17:05.204 "raid_level": "raid1", 00:17:05.204 "superblock": true, 00:17:05.204 "num_base_bdevs": 2, 00:17:05.204 "num_base_bdevs_discovered": 1, 00:17:05.204 "num_base_bdevs_operational": 2, 00:17:05.204 "base_bdevs_list": [ 00:17:05.204 { 00:17:05.204 "name": "BaseBdev1", 00:17:05.204 "uuid": "3f174682-cfdc-4c62-868e-52f27fe899fd", 00:17:05.204 "is_configured": true, 00:17:05.204 "data_offset": 256, 00:17:05.204 "data_size": 7936 00:17:05.204 }, 00:17:05.204 { 00:17:05.204 "name": "BaseBdev2", 00:17:05.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.204 "is_configured": false, 00:17:05.204 "data_offset": 0, 00:17:05.204 "data_size": 0 00:17:05.204 } 00:17:05.204 ] 00:17:05.204 }' 00:17:05.204 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.204 09:30:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.463 [2024-12-12 09:30:39.405836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.463 [2024-12-12 09:30:39.405930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.463 [2024-12-12 09:30:39.417895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.463 [2024-12-12 09:30:39.420506] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.463 [2024-12-12 09:30:39.420572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:05.463 09:30:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.463 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.464 "name": "Existed_Raid", 00:17:05.464 "uuid": "cfc19450-92c3-4fab-957f-5701169bdc4a", 00:17:05.464 "strip_size_kb": 0, 00:17:05.464 "state": "configuring", 00:17:05.464 "raid_level": "raid1", 00:17:05.464 "superblock": true, 
00:17:05.464 "num_base_bdevs": 2, 00:17:05.464 "num_base_bdevs_discovered": 1, 00:17:05.464 "num_base_bdevs_operational": 2, 00:17:05.464 "base_bdevs_list": [ 00:17:05.464 { 00:17:05.464 "name": "BaseBdev1", 00:17:05.464 "uuid": "3f174682-cfdc-4c62-868e-52f27fe899fd", 00:17:05.464 "is_configured": true, 00:17:05.464 "data_offset": 256, 00:17:05.464 "data_size": 7936 00:17:05.464 }, 00:17:05.464 { 00:17:05.464 "name": "BaseBdev2", 00:17:05.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.464 "is_configured": false, 00:17:05.464 "data_offset": 0, 00:17:05.464 "data_size": 0 00:17:05.464 } 00:17:05.464 ] 00:17:05.464 }' 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.464 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.034 [2024-12-12 09:30:39.905633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.034 [2024-12-12 09:30:39.906021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:06.034 [2024-12-12 09:30:39.906041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.034 [2024-12-12 09:30:39.906383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:06.034 [2024-12-12 09:30:39.906613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:06.034 [2024-12-12 09:30:39.906637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:06.034 [2024-12-12 09:30:39.906815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.034 BaseBdev2 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.034 [ 00:17:06.034 { 00:17:06.034 "name": "BaseBdev2", 00:17:06.034 "aliases": [ 00:17:06.034 "504f0bda-ded0-4e86-b3a7-074a75886b8d" 00:17:06.034 ], 00:17:06.034 "product_name": "Malloc 
disk", 00:17:06.034 "block_size": 4096, 00:17:06.034 "num_blocks": 8192, 00:17:06.034 "uuid": "504f0bda-ded0-4e86-b3a7-074a75886b8d", 00:17:06.034 "assigned_rate_limits": { 00:17:06.034 "rw_ios_per_sec": 0, 00:17:06.034 "rw_mbytes_per_sec": 0, 00:17:06.034 "r_mbytes_per_sec": 0, 00:17:06.034 "w_mbytes_per_sec": 0 00:17:06.034 }, 00:17:06.034 "claimed": true, 00:17:06.034 "claim_type": "exclusive_write", 00:17:06.034 "zoned": false, 00:17:06.034 "supported_io_types": { 00:17:06.034 "read": true, 00:17:06.034 "write": true, 00:17:06.034 "unmap": true, 00:17:06.034 "flush": true, 00:17:06.034 "reset": true, 00:17:06.034 "nvme_admin": false, 00:17:06.034 "nvme_io": false, 00:17:06.034 "nvme_io_md": false, 00:17:06.034 "write_zeroes": true, 00:17:06.034 "zcopy": true, 00:17:06.034 "get_zone_info": false, 00:17:06.034 "zone_management": false, 00:17:06.034 "zone_append": false, 00:17:06.034 "compare": false, 00:17:06.034 "compare_and_write": false, 00:17:06.034 "abort": true, 00:17:06.034 "seek_hole": false, 00:17:06.034 "seek_data": false, 00:17:06.034 "copy": true, 00:17:06.034 "nvme_iov_md": false 00:17:06.034 }, 00:17:06.034 "memory_domains": [ 00:17:06.034 { 00:17:06.034 "dma_device_id": "system", 00:17:06.034 "dma_device_type": 1 00:17:06.034 }, 00:17:06.034 { 00:17:06.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.034 "dma_device_type": 2 00:17:06.034 } 00:17:06.034 ], 00:17:06.034 "driver_specific": {} 00:17:06.034 } 00:17:06.034 ] 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.034 "name": "Existed_Raid", 00:17:06.034 "uuid": "cfc19450-92c3-4fab-957f-5701169bdc4a", 00:17:06.034 "strip_size_kb": 0, 00:17:06.034 "state": "online", 
00:17:06.034 "raid_level": "raid1", 00:17:06.034 "superblock": true, 00:17:06.034 "num_base_bdevs": 2, 00:17:06.034 "num_base_bdevs_discovered": 2, 00:17:06.034 "num_base_bdevs_operational": 2, 00:17:06.034 "base_bdevs_list": [ 00:17:06.034 { 00:17:06.034 "name": "BaseBdev1", 00:17:06.034 "uuid": "3f174682-cfdc-4c62-868e-52f27fe899fd", 00:17:06.034 "is_configured": true, 00:17:06.034 "data_offset": 256, 00:17:06.034 "data_size": 7936 00:17:06.034 }, 00:17:06.034 { 00:17:06.034 "name": "BaseBdev2", 00:17:06.034 "uuid": "504f0bda-ded0-4e86-b3a7-074a75886b8d", 00:17:06.034 "is_configured": true, 00:17:06.034 "data_offset": 256, 00:17:06.034 "data_size": 7936 00:17:06.034 } 00:17:06.034 ] 00:17:06.034 }' 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.034 09:30:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.604 [2024-12-12 09:30:40.437136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:06.604 "name": "Existed_Raid", 00:17:06.604 "aliases": [ 00:17:06.604 "cfc19450-92c3-4fab-957f-5701169bdc4a" 00:17:06.604 ], 00:17:06.604 "product_name": "Raid Volume", 00:17:06.604 "block_size": 4096, 00:17:06.604 "num_blocks": 7936, 00:17:06.604 "uuid": "cfc19450-92c3-4fab-957f-5701169bdc4a", 00:17:06.604 "assigned_rate_limits": { 00:17:06.604 "rw_ios_per_sec": 0, 00:17:06.604 "rw_mbytes_per_sec": 0, 00:17:06.604 "r_mbytes_per_sec": 0, 00:17:06.604 "w_mbytes_per_sec": 0 00:17:06.604 }, 00:17:06.604 "claimed": false, 00:17:06.604 "zoned": false, 00:17:06.604 "supported_io_types": { 00:17:06.604 "read": true, 00:17:06.604 "write": true, 00:17:06.604 "unmap": false, 00:17:06.604 "flush": false, 00:17:06.604 "reset": true, 00:17:06.604 "nvme_admin": false, 00:17:06.604 "nvme_io": false, 00:17:06.604 "nvme_io_md": false, 00:17:06.604 "write_zeroes": true, 00:17:06.604 "zcopy": false, 00:17:06.604 "get_zone_info": false, 00:17:06.604 "zone_management": false, 00:17:06.604 "zone_append": false, 00:17:06.604 "compare": false, 00:17:06.604 "compare_and_write": false, 00:17:06.604 "abort": false, 00:17:06.604 "seek_hole": false, 00:17:06.604 "seek_data": false, 00:17:06.604 "copy": false, 00:17:06.604 "nvme_iov_md": false 00:17:06.604 }, 00:17:06.604 "memory_domains": [ 00:17:06.604 { 00:17:06.604 "dma_device_id": "system", 00:17:06.604 "dma_device_type": 1 00:17:06.604 }, 00:17:06.604 { 00:17:06.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.604 "dma_device_type": 2 00:17:06.604 }, 00:17:06.604 { 00:17:06.604 
"dma_device_id": "system", 00:17:06.604 "dma_device_type": 1 00:17:06.604 }, 00:17:06.604 { 00:17:06.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.604 "dma_device_type": 2 00:17:06.604 } 00:17:06.604 ], 00:17:06.604 "driver_specific": { 00:17:06.604 "raid": { 00:17:06.604 "uuid": "cfc19450-92c3-4fab-957f-5701169bdc4a", 00:17:06.604 "strip_size_kb": 0, 00:17:06.604 "state": "online", 00:17:06.604 "raid_level": "raid1", 00:17:06.604 "superblock": true, 00:17:06.604 "num_base_bdevs": 2, 00:17:06.604 "num_base_bdevs_discovered": 2, 00:17:06.604 "num_base_bdevs_operational": 2, 00:17:06.604 "base_bdevs_list": [ 00:17:06.604 { 00:17:06.604 "name": "BaseBdev1", 00:17:06.604 "uuid": "3f174682-cfdc-4c62-868e-52f27fe899fd", 00:17:06.604 "is_configured": true, 00:17:06.604 "data_offset": 256, 00:17:06.604 "data_size": 7936 00:17:06.604 }, 00:17:06.604 { 00:17:06.604 "name": "BaseBdev2", 00:17:06.604 "uuid": "504f0bda-ded0-4e86-b3a7-074a75886b8d", 00:17:06.604 "is_configured": true, 00:17:06.604 "data_offset": 256, 00:17:06.604 "data_size": 7936 00:17:06.604 } 00:17:06.604 ] 00:17:06.604 } 00:17:06.604 } 00:17:06.604 }' 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:06.604 BaseBdev2' 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.604 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.605 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.865 
09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.865 [2024-12-12 09:30:40.636522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.865 09:30:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.865 "name": "Existed_Raid", 00:17:06.865 "uuid": "cfc19450-92c3-4fab-957f-5701169bdc4a", 00:17:06.865 "strip_size_kb": 0, 00:17:06.865 "state": "online", 00:17:06.865 "raid_level": "raid1", 00:17:06.865 "superblock": true, 00:17:06.865 "num_base_bdevs": 2, 00:17:06.865 "num_base_bdevs_discovered": 1, 00:17:06.865 "num_base_bdevs_operational": 1, 00:17:06.865 "base_bdevs_list": [ 00:17:06.865 { 00:17:06.865 "name": null, 00:17:06.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.865 "is_configured": false, 00:17:06.865 "data_offset": 0, 00:17:06.865 "data_size": 7936 00:17:06.865 }, 00:17:06.865 { 00:17:06.865 "name": "BaseBdev2", 00:17:06.865 "uuid": "504f0bda-ded0-4e86-b3a7-074a75886b8d", 00:17:06.865 "is_configured": true, 00:17:06.865 "data_offset": 256, 00:17:06.865 "data_size": 7936 00:17:06.865 } 00:17:06.865 ] 00:17:06.865 }' 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.865 09:30:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:07.436 09:30:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.436 [2024-12-12 09:30:41.292096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.436 [2024-12-12 09:30:41.292235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.436 [2024-12-12 09:30:41.403227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.436 [2024-12-12 09:30:41.403302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.436 [2024-12-12 09:30:41.403316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:07.436 09:30:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.436 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 87084 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87084 ']' 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87084 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87084 00:17:07.697 killing process with pid 87084 00:17:07.697 09:30:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87084' 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87084 00:17:07.697 [2024-12-12 09:30:41.504883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.697 09:30:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87084 00:17:07.697 [2024-12-12 09:30:41.523242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.077 ************************************ 00:17:09.077 END TEST raid_state_function_test_sb_4k 00:17:09.077 ************************************ 00:17:09.077 09:30:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:09.077 00:17:09.077 real 0m5.489s 00:17:09.077 user 0m7.725s 00:17:09.077 sys 0m0.998s 00:17:09.077 09:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.077 09:30:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.077 09:30:42 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:09.077 09:30:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:09.077 09:30:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.077 09:30:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.077 ************************************ 00:17:09.077 START TEST raid_superblock_test_4k 00:17:09.077 ************************************ 00:17:09.077 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:09.077 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:09.077 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87336 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 87336 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 87336 ']' 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.078 09:30:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 [2024-12-12 09:30:43.013163] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:09.078 [2024-12-12 09:30:43.013282] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87336 ] 00:17:09.338 [2024-12-12 09:30:43.190106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.338 [2024-12-12 09:30:43.343914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.606 [2024-12-12 09:30:43.596939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.606 [2024-12-12 09:30:43.597037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:09.881 09:30:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.881 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 malloc1 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 [2024-12-12 09:30:43.960979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.140 [2024-12-12 09:30:43.961057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.140 
[2024-12-12 09:30:43.961082] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:10.140 [2024-12-12 09:30:43.961094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.140 [2024-12-12 09:30:43.963603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.140 [2024-12-12 09:30:43.963642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.140 pt1 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.140 09:30:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:10.141 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.141 09:30:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.141 malloc2 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.141 [2024-12-12 09:30:44.029258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.141 [2024-12-12 09:30:44.029351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.141 [2024-12-12 09:30:44.029382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:10.141 [2024-12-12 09:30:44.029392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.141 [2024-12-12 09:30:44.032113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.141 [2024-12-12 09:30:44.032157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.141 pt2 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.141 [2024-12-12 09:30:44.041284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.141 [2024-12-12 09:30:44.043437] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.141 [2024-12-12 09:30:44.043631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:10.141 [2024-12-12 09:30:44.043654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:10.141 [2024-12-12 09:30:44.043977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:10.141 [2024-12-12 09:30:44.044182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:10.141 [2024-12-12 09:30:44.044207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:10.141 [2024-12-12 09:30:44.044387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.141 "name": "raid_bdev1", 00:17:10.141 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:10.141 "strip_size_kb": 0, 00:17:10.141 "state": "online", 00:17:10.141 "raid_level": "raid1", 00:17:10.141 "superblock": true, 00:17:10.141 "num_base_bdevs": 2, 00:17:10.141 "num_base_bdevs_discovered": 2, 00:17:10.141 "num_base_bdevs_operational": 2, 00:17:10.141 "base_bdevs_list": [ 00:17:10.141 { 00:17:10.141 "name": "pt1", 00:17:10.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:10.141 "is_configured": true, 00:17:10.141 "data_offset": 256, 00:17:10.141 "data_size": 7936 00:17:10.141 }, 00:17:10.141 { 00:17:10.141 "name": "pt2", 00:17:10.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.141 "is_configured": true, 00:17:10.141 "data_offset": 256, 00:17:10.141 "data_size": 7936 00:17:10.141 } 00:17:10.141 ] 00:17:10.141 }' 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.141 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:10.709 09:30:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.709 [2024-12-12 09:30:44.504916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.709 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.709 "name": "raid_bdev1", 00:17:10.709 "aliases": [ 00:17:10.709 "44dfb033-7ce4-470a-98d0-6add39618239" 00:17:10.709 ], 00:17:10.709 "product_name": "Raid Volume", 00:17:10.709 "block_size": 4096, 00:17:10.709 "num_blocks": 7936, 00:17:10.709 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:10.709 "assigned_rate_limits": { 00:17:10.709 "rw_ios_per_sec": 0, 00:17:10.709 "rw_mbytes_per_sec": 0, 00:17:10.709 "r_mbytes_per_sec": 0, 00:17:10.709 "w_mbytes_per_sec": 0 00:17:10.709 }, 00:17:10.709 "claimed": false, 00:17:10.709 "zoned": false, 00:17:10.709 "supported_io_types": { 00:17:10.709 "read": true, 00:17:10.709 "write": true, 00:17:10.709 "unmap": false, 00:17:10.709 "flush": false, 
00:17:10.709 "reset": true, 00:17:10.709 "nvme_admin": false, 00:17:10.709 "nvme_io": false, 00:17:10.709 "nvme_io_md": false, 00:17:10.709 "write_zeroes": true, 00:17:10.709 "zcopy": false, 00:17:10.709 "get_zone_info": false, 00:17:10.709 "zone_management": false, 00:17:10.709 "zone_append": false, 00:17:10.709 "compare": false, 00:17:10.709 "compare_and_write": false, 00:17:10.709 "abort": false, 00:17:10.709 "seek_hole": false, 00:17:10.709 "seek_data": false, 00:17:10.709 "copy": false, 00:17:10.709 "nvme_iov_md": false 00:17:10.709 }, 00:17:10.709 "memory_domains": [ 00:17:10.709 { 00:17:10.709 "dma_device_id": "system", 00:17:10.709 "dma_device_type": 1 00:17:10.709 }, 00:17:10.709 { 00:17:10.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.709 "dma_device_type": 2 00:17:10.709 }, 00:17:10.709 { 00:17:10.709 "dma_device_id": "system", 00:17:10.709 "dma_device_type": 1 00:17:10.709 }, 00:17:10.709 { 00:17:10.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.709 "dma_device_type": 2 00:17:10.710 } 00:17:10.710 ], 00:17:10.710 "driver_specific": { 00:17:10.710 "raid": { 00:17:10.710 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:10.710 "strip_size_kb": 0, 00:17:10.710 "state": "online", 00:17:10.710 "raid_level": "raid1", 00:17:10.710 "superblock": true, 00:17:10.710 "num_base_bdevs": 2, 00:17:10.710 "num_base_bdevs_discovered": 2, 00:17:10.710 "num_base_bdevs_operational": 2, 00:17:10.710 "base_bdevs_list": [ 00:17:10.710 { 00:17:10.710 "name": "pt1", 00:17:10.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:10.710 "is_configured": true, 00:17:10.710 "data_offset": 256, 00:17:10.710 "data_size": 7936 00:17:10.710 }, 00:17:10.710 { 00:17:10.710 "name": "pt2", 00:17:10.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.710 "is_configured": true, 00:17:10.710 "data_offset": 256, 00:17:10.710 "data_size": 7936 00:17:10.710 } 00:17:10.710 ] 00:17:10.710 } 00:17:10.710 } 00:17:10.710 }' 00:17:10.710 09:30:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:10.710 pt2' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.710 09:30:44 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:10.710 [2024-12-12 09:30:44.712612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.710 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=44dfb033-7ce4-470a-98d0-6add39618239 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 44dfb033-7ce4-470a-98d0-6add39618239 ']' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 [2024-12-12 09:30:44.760130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.970 [2024-12-12 09:30:44.760184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.970 [2024-12-12 09:30:44.760307] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.970 [2024-12-12 09:30:44.760402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.970 [2024-12-12 09:30:44.760424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 [2024-12-12 09:30:44.899936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:10.970 [2024-12-12 09:30:44.902344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:10.970 [2024-12-12 09:30:44.902434] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:10.970 [2024-12-12 09:30:44.902513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:10.970 [2024-12-12 09:30:44.902532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.970 [2024-12-12 09:30:44.902545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:10.970 request: 00:17:10.970 { 00:17:10.970 "name": "raid_bdev1", 00:17:10.970 "raid_level": "raid1", 00:17:10.970 "base_bdevs": [ 00:17:10.970 "malloc1", 00:17:10.970 "malloc2" 00:17:10.970 ], 00:17:10.970 "superblock": false, 00:17:10.970 "method": "bdev_raid_create", 00:17:10.970 "req_id": 1 00:17:10.970 } 00:17:10.970 Got JSON-RPC error response 00:17:10.970 response: 00:17:10.970 { 00:17:10.970 "code": -17, 00:17:10.970 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:10.970 } 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 [2024-12-12 09:30:44.967896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.970 [2024-12-12 09:30:44.967987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.970 [2024-12-12 09:30:44.968009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:10.970 [2024-12-12 09:30:44.968022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.970 [2024-12-12 09:30:44.970641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.970 [2024-12-12 09:30:44.970677] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.970 [2024-12-12 09:30:44.970779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:10.970 [2024-12-12 09:30:44.970842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.970 pt1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 09:30:44 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.229 09:30:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.230 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.230 "name": "raid_bdev1", 00:17:11.230 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:11.230 "strip_size_kb": 0, 00:17:11.230 "state": "configuring", 00:17:11.230 "raid_level": "raid1", 00:17:11.230 "superblock": true, 00:17:11.230 "num_base_bdevs": 2, 00:17:11.230 "num_base_bdevs_discovered": 1, 00:17:11.230 "num_base_bdevs_operational": 2, 00:17:11.230 "base_bdevs_list": [ 00:17:11.230 { 00:17:11.230 "name": "pt1", 00:17:11.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.230 "is_configured": true, 00:17:11.230 "data_offset": 256, 00:17:11.230 "data_size": 7936 00:17:11.230 }, 00:17:11.230 { 00:17:11.230 "name": null, 00:17:11.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.230 "is_configured": false, 00:17:11.230 "data_offset": 256, 00:17:11.230 "data_size": 7936 00:17:11.230 } 00:17:11.230 ] 00:17:11.230 }' 00:17:11.230 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.230 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:11.490 [2024-12-12 09:30:45.451934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.490 [2024-12-12 09:30:45.452125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.490 [2024-12-12 09:30:45.452189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:11.490 [2024-12-12 09:30:45.452229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.490 [2024-12-12 09:30:45.452797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.490 [2024-12-12 09:30:45.452869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.490 [2024-12-12 09:30:45.453015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:11.490 [2024-12-12 09:30:45.453079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.490 [2024-12-12 09:30:45.453251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:11.490 [2024-12-12 09:30:45.453293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:11.490 [2024-12-12 09:30:45.453592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:11.490 [2024-12-12 09:30:45.453801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:11.490 [2024-12-12 09:30:45.453840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:11.490 [2024-12-12 09:30:45.454039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.490 pt2 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:11.490 09:30:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.490 "name": "raid_bdev1", 00:17:11.490 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:11.490 
"strip_size_kb": 0, 00:17:11.490 "state": "online", 00:17:11.490 "raid_level": "raid1", 00:17:11.490 "superblock": true, 00:17:11.490 "num_base_bdevs": 2, 00:17:11.490 "num_base_bdevs_discovered": 2, 00:17:11.490 "num_base_bdevs_operational": 2, 00:17:11.490 "base_bdevs_list": [ 00:17:11.490 { 00:17:11.490 "name": "pt1", 00:17:11.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.490 "is_configured": true, 00:17:11.490 "data_offset": 256, 00:17:11.490 "data_size": 7936 00:17:11.490 }, 00:17:11.490 { 00:17:11.490 "name": "pt2", 00:17:11.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.490 "is_configured": true, 00:17:11.490 "data_offset": 256, 00:17:11.490 "data_size": 7936 00:17:11.490 } 00:17:11.490 ] 00:17:11.490 }' 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.490 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.061 09:30:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:12.061 [2024-12-12 09:30:45.968061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.061 09:30:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.061 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:12.061 "name": "raid_bdev1", 00:17:12.061 "aliases": [ 00:17:12.061 "44dfb033-7ce4-470a-98d0-6add39618239" 00:17:12.061 ], 00:17:12.061 "product_name": "Raid Volume", 00:17:12.061 "block_size": 4096, 00:17:12.061 "num_blocks": 7936, 00:17:12.061 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:12.061 "assigned_rate_limits": { 00:17:12.061 "rw_ios_per_sec": 0, 00:17:12.061 "rw_mbytes_per_sec": 0, 00:17:12.061 "r_mbytes_per_sec": 0, 00:17:12.061 "w_mbytes_per_sec": 0 00:17:12.061 }, 00:17:12.061 "claimed": false, 00:17:12.061 "zoned": false, 00:17:12.061 "supported_io_types": { 00:17:12.061 "read": true, 00:17:12.061 "write": true, 00:17:12.061 "unmap": false, 00:17:12.061 "flush": false, 00:17:12.061 "reset": true, 00:17:12.061 "nvme_admin": false, 00:17:12.061 "nvme_io": false, 00:17:12.061 "nvme_io_md": false, 00:17:12.061 "write_zeroes": true, 00:17:12.061 "zcopy": false, 00:17:12.061 "get_zone_info": false, 00:17:12.061 "zone_management": false, 00:17:12.061 "zone_append": false, 00:17:12.061 "compare": false, 00:17:12.061 "compare_and_write": false, 00:17:12.061 "abort": false, 00:17:12.061 "seek_hole": false, 00:17:12.061 "seek_data": false, 00:17:12.061 "copy": false, 00:17:12.061 "nvme_iov_md": false 00:17:12.061 }, 00:17:12.061 "memory_domains": [ 00:17:12.061 { 00:17:12.061 "dma_device_id": "system", 00:17:12.061 "dma_device_type": 1 00:17:12.061 }, 00:17:12.061 { 00:17:12.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.061 "dma_device_type": 2 00:17:12.061 }, 00:17:12.061 { 00:17:12.061 "dma_device_id": "system", 00:17:12.061 
"dma_device_type": 1 00:17:12.061 }, 00:17:12.061 { 00:17:12.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.061 "dma_device_type": 2 00:17:12.061 } 00:17:12.061 ], 00:17:12.061 "driver_specific": { 00:17:12.061 "raid": { 00:17:12.061 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:12.061 "strip_size_kb": 0, 00:17:12.061 "state": "online", 00:17:12.061 "raid_level": "raid1", 00:17:12.061 "superblock": true, 00:17:12.061 "num_base_bdevs": 2, 00:17:12.061 "num_base_bdevs_discovered": 2, 00:17:12.061 "num_base_bdevs_operational": 2, 00:17:12.061 "base_bdevs_list": [ 00:17:12.061 { 00:17:12.061 "name": "pt1", 00:17:12.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.061 "is_configured": true, 00:17:12.061 "data_offset": 256, 00:17:12.061 "data_size": 7936 00:17:12.061 }, 00:17:12.061 { 00:17:12.061 "name": "pt2", 00:17:12.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.061 "is_configured": true, 00:17:12.061 "data_offset": 256, 00:17:12.061 "data_size": 7936 00:17:12.061 } 00:17:12.061 ] 00:17:12.061 } 00:17:12.061 } 00:17:12.061 }' 00:17:12.061 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.061 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:12.061 pt2' 00:17:12.061 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.321 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:12.321 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.321 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.322 
09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.322 [2024-12-12 09:30:46.187664] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 44dfb033-7ce4-470a-98d0-6add39618239 '!=' 44dfb033-7ce4-470a-98d0-6add39618239 ']' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.322 [2024-12-12 09:30:46.231334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.322 "name": "raid_bdev1", 00:17:12.322 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:12.322 "strip_size_kb": 0, 00:17:12.322 "state": "online", 00:17:12.322 "raid_level": "raid1", 00:17:12.322 "superblock": true, 00:17:12.322 "num_base_bdevs": 2, 00:17:12.322 "num_base_bdevs_discovered": 1, 00:17:12.322 "num_base_bdevs_operational": 1, 00:17:12.322 "base_bdevs_list": [ 00:17:12.322 { 00:17:12.322 "name": null, 00:17:12.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.322 "is_configured": false, 00:17:12.322 "data_offset": 0, 00:17:12.322 "data_size": 7936 00:17:12.322 }, 00:17:12.322 { 00:17:12.322 "name": "pt2", 00:17:12.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.322 "is_configured": true, 00:17:12.322 "data_offset": 256, 00:17:12.322 "data_size": 7936 00:17:12.322 } 00:17:12.322 ] 00:17:12.322 }' 00:17:12.322 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.322 09:30:46 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.891 [2024-12-12 09:30:46.634652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.891 [2024-12-12 09:30:46.634779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.891 [2024-12-12 09:30:46.634914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.891 [2024-12-12 09:30:46.634999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.891 [2024-12-12 09:30:46.635052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:12.891 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.892 [2024-12-12 09:30:46.706479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.892 [2024-12-12 09:30:46.706621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.892 [2024-12-12 09:30:46.706660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:12.892 [2024-12-12 09:30:46.706701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.892 [2024-12-12 09:30:46.709397] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.892 [2024-12-12 09:30:46.709479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.892 [2024-12-12 09:30:46.709603] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:12.892 [2024-12-12 09:30:46.709704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.892 [2024-12-12 09:30:46.709882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:12.892 [2024-12-12 09:30:46.709924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:12.892 [2024-12-12 09:30:46.710210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:12.892 [2024-12-12 09:30:46.710426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:12.892 pt2 00:17:12.892 [2024-12-12 09:30:46.710467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:12.892 [2024-12-12 09:30:46.710673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.892 "name": "raid_bdev1", 00:17:12.892 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:12.892 "strip_size_kb": 0, 00:17:12.892 "state": "online", 00:17:12.892 "raid_level": "raid1", 00:17:12.892 "superblock": true, 00:17:12.892 "num_base_bdevs": 2, 00:17:12.892 "num_base_bdevs_discovered": 1, 00:17:12.892 "num_base_bdevs_operational": 1, 00:17:12.892 "base_bdevs_list": [ 00:17:12.892 { 00:17:12.892 "name": null, 00:17:12.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.892 "is_configured": false, 00:17:12.892 "data_offset": 256, 00:17:12.892 "data_size": 7936 00:17:12.892 }, 00:17:12.892 { 00:17:12.892 "name": "pt2", 00:17:12.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.892 "is_configured": true, 00:17:12.892 "data_offset": 256, 00:17:12.892 "data_size": 7936 00:17:12.892 } 00:17:12.892 ] 00:17:12.892 }' 
00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.892 09:30:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.461 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.461 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.461 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.461 [2024-12-12 09:30:47.189818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.461 [2024-12-12 09:30:47.189944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.461 [2024-12-12 09:30:47.190082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.461 [2024-12-12 09:30:47.190164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.461 [2024-12-12 09:30:47.190216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:13.461 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.461 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.462 [2024-12-12 09:30:47.253728] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.462 [2024-12-12 09:30:47.253866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.462 [2024-12-12 09:30:47.253918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:13.462 [2024-12-12 09:30:47.253952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.462 [2024-12-12 09:30:47.256702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.462 [2024-12-12 09:30:47.256775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.462 pt1 00:17:13.462 [2024-12-12 09:30:47.256901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:13.462 [2024-12-12 09:30:47.256969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.462 [2024-12-12 09:30:47.257153] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:13.462 [2024-12-12 09:30:47.257167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.462 [2024-12-12 09:30:47.257185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:13.462 [2024-12-12 
09:30:47.257257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.462 [2024-12-12 09:30:47.257342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:13.462 [2024-12-12 09:30:47.257351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.462 [2024-12-12 09:30:47.257619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:13.462 [2024-12-12 09:30:47.257802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:13.462 [2024-12-12 09:30:47.257817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:13.462 [2024-12-12 09:30:47.258040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.462 "name": "raid_bdev1", 00:17:13.462 "uuid": "44dfb033-7ce4-470a-98d0-6add39618239", 00:17:13.462 "strip_size_kb": 0, 00:17:13.462 "state": "online", 00:17:13.462 "raid_level": "raid1", 00:17:13.462 "superblock": true, 00:17:13.462 "num_base_bdevs": 2, 00:17:13.462 "num_base_bdevs_discovered": 1, 00:17:13.462 "num_base_bdevs_operational": 1, 00:17:13.462 "base_bdevs_list": [ 00:17:13.462 { 00:17:13.462 "name": null, 00:17:13.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.462 "is_configured": false, 00:17:13.462 "data_offset": 256, 00:17:13.462 "data_size": 7936 00:17:13.462 }, 00:17:13.462 { 00:17:13.462 "name": "pt2", 00:17:13.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.462 "is_configured": true, 00:17:13.462 "data_offset": 256, 00:17:13.462 "data_size": 7936 00:17:13.462 } 00:17:13.462 ] 00:17:13.462 }' 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.462 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.722 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.982 [2024-12-12 09:30:47.745357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 44dfb033-7ce4-470a-98d0-6add39618239 '!=' 44dfb033-7ce4-470a-98d0-6add39618239 ']' 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87336 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 87336 ']' 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 87336 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87336 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.982 killing process with pid 87336 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87336' 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 87336 00:17:13.982 [2024-12-12 09:30:47.809264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.982 [2024-12-12 09:30:47.809369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.982 [2024-12-12 09:30:47.809427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.982 09:30:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 87336 00:17:13.982 [2024-12-12 09:30:47.809443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:14.242 [2024-12-12 09:30:48.037519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.624 09:30:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:15.624 00:17:15.624 real 0m6.434s 00:17:15.624 user 0m9.544s 00:17:15.624 sys 0m1.184s 00:17:15.624 09:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.624 ************************************ 00:17:15.624 END TEST raid_superblock_test_4k 00:17:15.624 ************************************ 00:17:15.624 09:30:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 09:30:49 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:15.624 09:30:49 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:15.624 09:30:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:15.624 09:30:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.624 09:30:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 ************************************ 00:17:15.624 START TEST raid_rebuild_test_sb_4k 00:17:15.624 ************************************ 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87665 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87665 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87665 ']' 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:15.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.624 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:15.624 Zero copy mechanism will not be used. 00:17:15.624 [2024-12-12 09:30:49.547770] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:15.624 [2024-12-12 09:30:49.547933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87665 ] 00:17:15.892 [2024-12-12 09:30:49.743782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.892 [2024-12-12 09:30:49.905329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.151 [2024-12-12 09:30:50.170915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.151 [2024-12-12 09:30:50.171020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.420 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.420 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:16.420 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.420 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:16.420 
09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.420 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 BaseBdev1_malloc 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 [2024-12-12 09:30:50.469496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:16.690 [2024-12-12 09:30:50.469671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.690 [2024-12-12 09:30:50.469719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:16.690 [2024-12-12 09:30:50.469756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.690 [2024-12-12 09:30:50.472373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.690 [2024-12-12 09:30:50.472471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:16.690 BaseBdev1 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.690 BaseBdev2_malloc 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 [2024-12-12 09:30:50.534766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:16.690 [2024-12-12 09:30:50.534934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.690 [2024-12-12 09:30:50.534989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:16.690 [2024-12-12 09:30:50.535033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.690 [2024-12-12 09:30:50.537944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.690 [2024-12-12 09:30:50.538040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:16.690 BaseBdev2 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 spare_malloc 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 spare_delay 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 [2024-12-12 09:30:50.628245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.690 [2024-12-12 09:30:50.628419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.690 [2024-12-12 09:30:50.628469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:16.690 [2024-12-12 09:30:50.628510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.690 [2024-12-12 09:30:50.631275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.690 [2024-12-12 09:30:50.631364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.690 spare 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.690 
[2024-12-12 09:30:50.640345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.690 [2024-12-12 09:30:50.642470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.690 [2024-12-12 09:30:50.642681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:16.690 [2024-12-12 09:30:50.642697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:16.690 [2024-12-12 09:30:50.642960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:16.690 [2024-12-12 09:30:50.643164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:16.690 [2024-12-12 09:30:50.643173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:16.690 [2024-12-12 09:30:50.643341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.690 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.691 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.691 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.691 "name": "raid_bdev1", 00:17:16.691 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:16.691 "strip_size_kb": 0, 00:17:16.691 "state": "online", 00:17:16.691 "raid_level": "raid1", 00:17:16.691 "superblock": true, 00:17:16.691 "num_base_bdevs": 2, 00:17:16.691 "num_base_bdevs_discovered": 2, 00:17:16.691 "num_base_bdevs_operational": 2, 00:17:16.691 "base_bdevs_list": [ 00:17:16.691 { 00:17:16.691 "name": "BaseBdev1", 00:17:16.691 "uuid": "68b327bb-0b62-5d95-8af6-ff36ced6c8a7", 00:17:16.691 "is_configured": true, 00:17:16.691 "data_offset": 256, 00:17:16.691 "data_size": 7936 00:17:16.691 }, 00:17:16.691 { 00:17:16.691 "name": "BaseBdev2", 00:17:16.691 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:16.691 "is_configured": true, 00:17:16.691 "data_offset": 256, 00:17:16.691 "data_size": 7936 00:17:16.691 } 00:17:16.691 ] 00:17:16.691 }' 00:17:16.691 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.691 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.260 [2024-12-12 09:30:51.128295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.260 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:17.520 [2024-12-12 09:30:51.408016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:17.520 /dev/nbd0 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.520 1+0 records in 00:17:17.520 1+0 records out 00:17:17.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307351 s, 13.3 MB/s 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:17.520 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:18.459 7936+0 records in 00:17:18.459 7936+0 records out 00:17:18.459 32505856 bytes (33 MB, 31 MiB) copied, 0.670377 s, 48.5 MB/s 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:18.459 [2024-12-12 09:30:52.374396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.459 [2024-12-12 09:30:52.390691] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.459 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.459 "name": 
"raid_bdev1", 00:17:18.460 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:18.460 "strip_size_kb": 0, 00:17:18.460 "state": "online", 00:17:18.460 "raid_level": "raid1", 00:17:18.460 "superblock": true, 00:17:18.460 "num_base_bdevs": 2, 00:17:18.460 "num_base_bdevs_discovered": 1, 00:17:18.460 "num_base_bdevs_operational": 1, 00:17:18.460 "base_bdevs_list": [ 00:17:18.460 { 00:17:18.460 "name": null, 00:17:18.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.460 "is_configured": false, 00:17:18.460 "data_offset": 0, 00:17:18.460 "data_size": 7936 00:17:18.460 }, 00:17:18.460 { 00:17:18.460 "name": "BaseBdev2", 00:17:18.460 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:18.460 "is_configured": true, 00:17:18.460 "data_offset": 256, 00:17:18.460 "data_size": 7936 00:17:18.460 } 00:17:18.460 ] 00:17:18.460 }' 00:17:18.460 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.460 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.027 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.028 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.028 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.028 [2024-12-12 09:30:52.830029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.028 [2024-12-12 09:30:52.850710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:19.028 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.028 09:30:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:19.028 [2024-12-12 09:30:52.853079] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.969 09:30:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.969 "name": "raid_bdev1", 00:17:19.969 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:19.969 "strip_size_kb": 0, 00:17:19.969 "state": "online", 00:17:19.969 "raid_level": "raid1", 00:17:19.969 "superblock": true, 00:17:19.969 "num_base_bdevs": 2, 00:17:19.969 "num_base_bdevs_discovered": 2, 00:17:19.969 "num_base_bdevs_operational": 2, 00:17:19.969 "process": { 00:17:19.969 "type": "rebuild", 00:17:19.969 "target": "spare", 00:17:19.969 "progress": { 00:17:19.969 "blocks": 2560, 00:17:19.969 "percent": 32 00:17:19.969 } 00:17:19.969 }, 00:17:19.969 "base_bdevs_list": [ 00:17:19.969 { 00:17:19.969 "name": "spare", 00:17:19.969 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 256, 
00:17:19.969 "data_size": 7936 00:17:19.969 }, 00:17:19.969 { 00:17:19.969 "name": "BaseBdev2", 00:17:19.969 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 256, 00:17:19.969 "data_size": 7936 00:17:19.969 } 00:17:19.969 ] 00:17:19.969 }' 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.969 09:30:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.228 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.228 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:20.228 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.228 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.228 [2024-12-12 09:30:54.017436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.228 [2024-12-12 09:30:54.063256] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.228 [2024-12-12 09:30:54.063429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.228 [2024-12-12 09:30:54.063481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.229 [2024-12-12 09:30:54.063510] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.229 
09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.229 "name": "raid_bdev1", 00:17:20.229 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:20.229 "strip_size_kb": 0, 00:17:20.229 "state": "online", 00:17:20.229 "raid_level": "raid1", 00:17:20.229 "superblock": true, 00:17:20.229 "num_base_bdevs": 2, 00:17:20.229 "num_base_bdevs_discovered": 1, 00:17:20.229 
"num_base_bdevs_operational": 1, 00:17:20.229 "base_bdevs_list": [ 00:17:20.229 { 00:17:20.229 "name": null, 00:17:20.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.229 "is_configured": false, 00:17:20.229 "data_offset": 0, 00:17:20.229 "data_size": 7936 00:17:20.229 }, 00:17:20.229 { 00:17:20.229 "name": "BaseBdev2", 00:17:20.229 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:20.229 "is_configured": true, 00:17:20.229 "data_offset": 256, 00:17:20.229 "data_size": 7936 00:17:20.229 } 00:17:20.229 ] 00:17:20.229 }' 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.229 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.797 
"name": "raid_bdev1", 00:17:20.797 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:20.797 "strip_size_kb": 0, 00:17:20.797 "state": "online", 00:17:20.797 "raid_level": "raid1", 00:17:20.797 "superblock": true, 00:17:20.797 "num_base_bdevs": 2, 00:17:20.797 "num_base_bdevs_discovered": 1, 00:17:20.797 "num_base_bdevs_operational": 1, 00:17:20.797 "base_bdevs_list": [ 00:17:20.797 { 00:17:20.797 "name": null, 00:17:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.797 "is_configured": false, 00:17:20.797 "data_offset": 0, 00:17:20.797 "data_size": 7936 00:17:20.797 }, 00:17:20.797 { 00:17:20.797 "name": "BaseBdev2", 00:17:20.797 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:20.797 "is_configured": true, 00:17:20.797 "data_offset": 256, 00:17:20.797 "data_size": 7936 00:17:20.797 } 00:17:20.797 ] 00:17:20.797 }' 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.797 [2024-12-12 09:30:54.723940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.797 [2024-12-12 09:30:54.742584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:20.797 09:30:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:20.797 [2024-12-12 09:30:54.745051] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.735 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.994 "name": "raid_bdev1", 00:17:21.994 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:21.994 "strip_size_kb": 0, 00:17:21.994 "state": "online", 00:17:21.994 "raid_level": "raid1", 00:17:21.994 "superblock": true, 00:17:21.994 "num_base_bdevs": 2, 00:17:21.994 "num_base_bdevs_discovered": 2, 00:17:21.994 "num_base_bdevs_operational": 2, 00:17:21.994 "process": { 00:17:21.994 "type": "rebuild", 00:17:21.994 "target": "spare", 00:17:21.994 "progress": { 00:17:21.994 "blocks": 2560, 00:17:21.994 
"percent": 32 00:17:21.994 } 00:17:21.994 }, 00:17:21.994 "base_bdevs_list": [ 00:17:21.994 { 00:17:21.994 "name": "spare", 00:17:21.994 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:21.994 "is_configured": true, 00:17:21.994 "data_offset": 256, 00:17:21.994 "data_size": 7936 00:17:21.994 }, 00:17:21.994 { 00:17:21.994 "name": "BaseBdev2", 00:17:21.994 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:21.994 "is_configured": true, 00:17:21.994 "data_offset": 256, 00:17:21.994 "data_size": 7936 00:17:21.994 } 00:17:21.994 ] 00:17:21.994 }' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:21.994 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=681 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.994 "name": "raid_bdev1", 00:17:21.994 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:21.994 "strip_size_kb": 0, 00:17:21.994 "state": "online", 00:17:21.994 "raid_level": "raid1", 00:17:21.994 "superblock": true, 00:17:21.994 "num_base_bdevs": 2, 00:17:21.994 "num_base_bdevs_discovered": 2, 00:17:21.994 "num_base_bdevs_operational": 2, 00:17:21.994 "process": { 00:17:21.994 "type": "rebuild", 00:17:21.994 "target": "spare", 00:17:21.994 "progress": { 00:17:21.994 "blocks": 2816, 00:17:21.994 "percent": 35 00:17:21.994 } 00:17:21.994 }, 00:17:21.994 "base_bdevs_list": [ 00:17:21.994 { 00:17:21.994 "name": "spare", 00:17:21.994 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:21.994 "is_configured": true, 00:17:21.994 "data_offset": 256, 00:17:21.994 "data_size": 7936 00:17:21.994 }, 00:17:21.994 { 00:17:21.994 "name": "BaseBdev2", 
00:17:21.994 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:21.994 "is_configured": true, 00:17:21.994 "data_offset": 256, 00:17:21.994 "data_size": 7936 00:17:21.994 } 00:17:21.994 ] 00:17:21.994 }' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.994 09:30:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.254 09:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.254 09:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.192 "name": "raid_bdev1", 00:17:23.192 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:23.192 "strip_size_kb": 0, 00:17:23.192 "state": "online", 00:17:23.192 "raid_level": "raid1", 00:17:23.192 "superblock": true, 00:17:23.192 "num_base_bdevs": 2, 00:17:23.192 "num_base_bdevs_discovered": 2, 00:17:23.192 "num_base_bdevs_operational": 2, 00:17:23.192 "process": { 00:17:23.192 "type": "rebuild", 00:17:23.192 "target": "spare", 00:17:23.192 "progress": { 00:17:23.192 "blocks": 5888, 00:17:23.192 "percent": 74 00:17:23.192 } 00:17:23.192 }, 00:17:23.192 "base_bdevs_list": [ 00:17:23.192 { 00:17:23.192 "name": "spare", 00:17:23.192 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:23.192 "is_configured": true, 00:17:23.192 "data_offset": 256, 00:17:23.192 "data_size": 7936 00:17:23.192 }, 00:17:23.192 { 00:17:23.192 "name": "BaseBdev2", 00:17:23.192 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:23.192 "is_configured": true, 00:17:23.192 "data_offset": 256, 00:17:23.192 "data_size": 7936 00:17:23.192 } 00:17:23.192 ] 00:17:23.192 }' 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.192 09:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.133 [2024-12-12 09:30:57.871086] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:24.133 [2024-12-12 09:30:57.871276] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:24.133 [2024-12-12 09:30:57.871458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.393 "name": "raid_bdev1", 00:17:24.393 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:24.393 "strip_size_kb": 0, 00:17:24.393 "state": "online", 00:17:24.393 "raid_level": "raid1", 00:17:24.393 "superblock": true, 00:17:24.393 "num_base_bdevs": 2, 00:17:24.393 "num_base_bdevs_discovered": 2, 00:17:24.393 "num_base_bdevs_operational": 2, 00:17:24.393 "base_bdevs_list": [ 00:17:24.393 { 00:17:24.393 "name": 
"spare", 00:17:24.393 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:24.393 "is_configured": true, 00:17:24.393 "data_offset": 256, 00:17:24.393 "data_size": 7936 00:17:24.393 }, 00:17:24.393 { 00:17:24.393 "name": "BaseBdev2", 00:17:24.393 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:24.393 "is_configured": true, 00:17:24.393 "data_offset": 256, 00:17:24.393 "data_size": 7936 00:17:24.393 } 00:17:24.393 ] 00:17:24.393 }' 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:24.393 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.394 "name": "raid_bdev1", 00:17:24.394 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:24.394 "strip_size_kb": 0, 00:17:24.394 "state": "online", 00:17:24.394 "raid_level": "raid1", 00:17:24.394 "superblock": true, 00:17:24.394 "num_base_bdevs": 2, 00:17:24.394 "num_base_bdevs_discovered": 2, 00:17:24.394 "num_base_bdevs_operational": 2, 00:17:24.394 "base_bdevs_list": [ 00:17:24.394 { 00:17:24.394 "name": "spare", 00:17:24.394 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:24.394 "is_configured": true, 00:17:24.394 "data_offset": 256, 00:17:24.394 "data_size": 7936 00:17:24.394 }, 00:17:24.394 { 00:17:24.394 "name": "BaseBdev2", 00:17:24.394 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:24.394 "is_configured": true, 00:17:24.394 "data_offset": 256, 00:17:24.394 "data_size": 7936 00:17:24.394 } 00:17:24.394 ] 00:17:24.394 }' 00:17:24.394 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.653 "name": "raid_bdev1", 00:17:24.653 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:24.653 "strip_size_kb": 0, 00:17:24.653 "state": "online", 00:17:24.653 "raid_level": "raid1", 00:17:24.653 "superblock": true, 00:17:24.653 "num_base_bdevs": 2, 00:17:24.653 "num_base_bdevs_discovered": 2, 00:17:24.653 "num_base_bdevs_operational": 2, 00:17:24.653 "base_bdevs_list": [ 00:17:24.653 { 00:17:24.653 "name": "spare", 00:17:24.653 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:24.653 "is_configured": true, 00:17:24.653 "data_offset": 256, 00:17:24.653 "data_size": 7936 00:17:24.653 }, 00:17:24.653 
{ 00:17:24.653 "name": "BaseBdev2", 00:17:24.653 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:24.653 "is_configured": true, 00:17:24.653 "data_offset": 256, 00:17:24.653 "data_size": 7936 00:17:24.653 } 00:17:24.653 ] 00:17:24.653 }' 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.653 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.915 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.915 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.915 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.915 [2024-12-12 09:30:58.933554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.915 [2024-12-12 09:30:58.933654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.915 [2024-12-12 09:30:58.933790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.915 [2024-12-12 09:30:58.933897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.915 [2024-12-12 09:30:58.933941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:25.176 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.177 
09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:25.177 09:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:25.177 /dev/nbd0 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:25.436 09:30:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.436 1+0 records in 00:17:25.436 1+0 records out 00:17:25.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440235 s, 9.3 MB/s 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:25.436 /dev/nbd1 00:17:25.436 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.696 1+0 records in 00:17:25.696 1+0 records out 00:17:25.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458698 s, 8.9 MB/s 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.696 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.956 09:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:26.214 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:26.214 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:26.214 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:26.214 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.214 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.214 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.215 [2024-12-12 09:31:00.179082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.215 [2024-12-12 09:31:00.179150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.215 [2024-12-12 09:31:00.179179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:26.215 [2024-12-12 09:31:00.179188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.215 [2024-12-12 09:31:00.181765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.215 [2024-12-12 09:31:00.181803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.215 [2024-12-12 09:31:00.181915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:26.215 [2024-12-12 09:31:00.181995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.215 [2024-12-12 09:31:00.182170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.215 spare 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.215 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.474 [2024-12-12 09:31:00.282091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:26.474 [2024-12-12 09:31:00.282123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:26.474 [2024-12-12 09:31:00.282445] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:26.474 [2024-12-12 09:31:00.282666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:26.474 [2024-12-12 09:31:00.282682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:26.474 [2024-12-12 09:31:00.282924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.474 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.474 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:26.474 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.474 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.475 09:31:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.475 "name": "raid_bdev1", 00:17:26.475 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:26.475 "strip_size_kb": 0, 00:17:26.475 "state": "online", 00:17:26.475 "raid_level": "raid1", 00:17:26.475 "superblock": true, 00:17:26.475 "num_base_bdevs": 2, 00:17:26.475 "num_base_bdevs_discovered": 2, 00:17:26.475 "num_base_bdevs_operational": 2, 00:17:26.475 "base_bdevs_list": [ 00:17:26.475 { 00:17:26.475 "name": "spare", 00:17:26.475 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:26.475 "is_configured": true, 00:17:26.475 "data_offset": 256, 00:17:26.475 "data_size": 7936 00:17:26.475 }, 00:17:26.475 { 00:17:26.475 "name": "BaseBdev2", 00:17:26.475 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:26.475 "is_configured": true, 00:17:26.475 "data_offset": 256, 00:17:26.475 "data_size": 7936 00:17:26.475 } 00:17:26.475 ] 00:17:26.475 }' 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.475 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.735 09:31:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.735 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.993 "name": "raid_bdev1", 00:17:26.993 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:26.993 "strip_size_kb": 0, 00:17:26.993 "state": "online", 00:17:26.993 "raid_level": "raid1", 00:17:26.993 "superblock": true, 00:17:26.993 "num_base_bdevs": 2, 00:17:26.993 "num_base_bdevs_discovered": 2, 00:17:26.993 "num_base_bdevs_operational": 2, 00:17:26.993 "base_bdevs_list": [ 00:17:26.993 { 00:17:26.993 "name": "spare", 00:17:26.993 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:26.993 "is_configured": true, 00:17:26.993 "data_offset": 256, 00:17:26.993 "data_size": 7936 00:17:26.993 }, 00:17:26.993 { 00:17:26.993 "name": "BaseBdev2", 00:17:26.993 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:26.993 "is_configured": true, 00:17:26.993 "data_offset": 256, 00:17:26.993 "data_size": 7936 00:17:26.993 } 00:17:26.993 ] 00:17:26.993 }' 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.993 09:31:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 [2024-12-12 09:31:00.905926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.993 "name": "raid_bdev1", 00:17:26.993 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:26.993 "strip_size_kb": 0, 00:17:26.993 "state": "online", 00:17:26.993 "raid_level": "raid1", 00:17:26.993 "superblock": true, 00:17:26.993 "num_base_bdevs": 2, 00:17:26.993 "num_base_bdevs_discovered": 1, 00:17:26.993 "num_base_bdevs_operational": 1, 00:17:26.993 "base_bdevs_list": [ 00:17:26.993 { 00:17:26.993 "name": null, 00:17:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.993 "is_configured": false, 00:17:26.993 "data_offset": 0, 00:17:26.993 "data_size": 7936 00:17:26.993 }, 00:17:26.993 { 00:17:26.993 "name": "BaseBdev2", 00:17:26.993 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:26.993 "is_configured": true, 00:17:26.993 "data_offset": 256, 00:17:26.993 "data_size": 7936 00:17:26.993 } 00:17:26.993 ] 00:17:26.993 }' 
00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.993 09:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.562 09:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.562 09:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.562 09:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.562 [2024-12-12 09:31:01.417181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.562 [2024-12-12 09:31:01.417457] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.562 [2024-12-12 09:31:01.417484] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:27.562 [2024-12-12 09:31:01.417527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.562 [2024-12-12 09:31:01.435291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:27.562 09:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.562 09:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:27.562 [2024-12-12 09:31:01.437550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.521 "name": "raid_bdev1", 00:17:28.521 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:28.521 "strip_size_kb": 0, 00:17:28.521 "state": "online", 00:17:28.521 "raid_level": "raid1", 00:17:28.521 "superblock": true, 00:17:28.521 "num_base_bdevs": 2, 00:17:28.521 "num_base_bdevs_discovered": 2, 00:17:28.521 "num_base_bdevs_operational": 2, 00:17:28.521 "process": { 00:17:28.521 "type": "rebuild", 00:17:28.521 "target": "spare", 00:17:28.521 "progress": { 00:17:28.521 "blocks": 2560, 00:17:28.521 "percent": 32 00:17:28.521 } 00:17:28.521 }, 00:17:28.521 "base_bdevs_list": [ 00:17:28.521 { 00:17:28.521 "name": "spare", 00:17:28.521 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:28.521 "is_configured": true, 00:17:28.521 "data_offset": 256, 00:17:28.521 "data_size": 7936 00:17:28.521 }, 00:17:28.521 { 00:17:28.521 "name": "BaseBdev2", 00:17:28.521 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:28.521 "is_configured": true, 00:17:28.521 "data_offset": 256, 00:17:28.521 "data_size": 7936 00:17:28.521 } 00:17:28.521 ] 00:17:28.521 }' 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.521 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.781 [2024-12-12 09:31:02.573609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.781 [2024-12-12 09:31:02.647349] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.781 [2024-12-12 09:31:02.647430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.781 [2024-12-12 09:31:02.647448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.781 [2024-12-12 09:31:02.647457] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.781 "name": "raid_bdev1", 00:17:28.781 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:28.781 "strip_size_kb": 0, 00:17:28.781 "state": "online", 00:17:28.781 "raid_level": "raid1", 00:17:28.781 "superblock": true, 00:17:28.781 "num_base_bdevs": 2, 00:17:28.781 "num_base_bdevs_discovered": 1, 00:17:28.781 "num_base_bdevs_operational": 1, 00:17:28.781 "base_bdevs_list": [ 00:17:28.781 { 00:17:28.781 "name": null, 00:17:28.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.781 "is_configured": false, 00:17:28.781 "data_offset": 0, 00:17:28.781 "data_size": 7936 00:17:28.781 }, 00:17:28.781 { 00:17:28.781 "name": "BaseBdev2", 00:17:28.781 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:28.781 "is_configured": true, 00:17:28.781 
"data_offset": 256, 00:17:28.781 "data_size": 7936 00:17:28.781 } 00:17:28.781 ] 00:17:28.781 }' 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.781 09:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.351 09:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:29.351 09:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.351 09:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.351 [2024-12-12 09:31:03.130766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:29.351 [2024-12-12 09:31:03.130872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.351 [2024-12-12 09:31:03.130899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:29.351 [2024-12-12 09:31:03.130911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.351 [2024-12-12 09:31:03.131506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.351 [2024-12-12 09:31:03.131543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:29.351 [2024-12-12 09:31:03.131676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:29.351 [2024-12-12 09:31:03.131701] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:29.351 [2024-12-12 09:31:03.131714] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:29.351 [2024-12-12 09:31:03.131749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.351 [2024-12-12 09:31:03.149078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:29.351 spare 00:17:29.351 09:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.351 09:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:29.351 [2024-12-12 09:31:03.151235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.289 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.290 "name": "raid_bdev1", 00:17:30.290 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:30.290 "strip_size_kb": 0, 00:17:30.290 
"state": "online", 00:17:30.290 "raid_level": "raid1", 00:17:30.290 "superblock": true, 00:17:30.290 "num_base_bdevs": 2, 00:17:30.290 "num_base_bdevs_discovered": 2, 00:17:30.290 "num_base_bdevs_operational": 2, 00:17:30.290 "process": { 00:17:30.290 "type": "rebuild", 00:17:30.290 "target": "spare", 00:17:30.290 "progress": { 00:17:30.290 "blocks": 2560, 00:17:30.290 "percent": 32 00:17:30.290 } 00:17:30.290 }, 00:17:30.290 "base_bdevs_list": [ 00:17:30.290 { 00:17:30.290 "name": "spare", 00:17:30.290 "uuid": "e98c3e13-cd57-55a7-bd73-f8c2256f5eb8", 00:17:30.290 "is_configured": true, 00:17:30.290 "data_offset": 256, 00:17:30.290 "data_size": 7936 00:17:30.290 }, 00:17:30.290 { 00:17:30.290 "name": "BaseBdev2", 00:17:30.290 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:30.290 "is_configured": true, 00:17:30.290 "data_offset": 256, 00:17:30.290 "data_size": 7936 00:17:30.290 } 00:17:30.290 ] 00:17:30.290 }' 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.290 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.549 [2024-12-12 09:31:04.315311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.549 [2024-12-12 09:31:04.361113] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:30.549 [2024-12-12 09:31:04.361209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.549 [2024-12-12 09:31:04.361229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.549 [2024-12-12 09:31:04.361237] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.549 "name": "raid_bdev1", 00:17:30.549 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:30.549 "strip_size_kb": 0, 00:17:30.549 "state": "online", 00:17:30.549 "raid_level": "raid1", 00:17:30.549 "superblock": true, 00:17:30.549 "num_base_bdevs": 2, 00:17:30.549 "num_base_bdevs_discovered": 1, 00:17:30.549 "num_base_bdevs_operational": 1, 00:17:30.549 "base_bdevs_list": [ 00:17:30.549 { 00:17:30.549 "name": null, 00:17:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.549 "is_configured": false, 00:17:30.549 "data_offset": 0, 00:17:30.549 "data_size": 7936 00:17:30.549 }, 00:17:30.549 { 00:17:30.549 "name": "BaseBdev2", 00:17:30.549 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:30.549 "is_configured": true, 00:17:30.549 "data_offset": 256, 00:17:30.549 "data_size": 7936 00:17:30.549 } 00:17:30.549 ] 00:17:30.549 }' 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.549 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.119 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.119 "name": "raid_bdev1", 00:17:31.119 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:31.119 "strip_size_kb": 0, 00:17:31.119 "state": "online", 00:17:31.119 "raid_level": "raid1", 00:17:31.119 "superblock": true, 00:17:31.119 "num_base_bdevs": 2, 00:17:31.119 "num_base_bdevs_discovered": 1, 00:17:31.119 "num_base_bdevs_operational": 1, 00:17:31.119 "base_bdevs_list": [ 00:17:31.119 { 00:17:31.119 "name": null, 00:17:31.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.120 "is_configured": false, 00:17:31.120 "data_offset": 0, 00:17:31.120 "data_size": 7936 00:17:31.120 }, 00:17:31.120 { 00:17:31.120 "name": "BaseBdev2", 00:17:31.120 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:31.120 "is_configured": true, 00:17:31.120 "data_offset": 256, 00:17:31.120 "data_size": 7936 00:17:31.120 } 00:17:31.120 ] 00:17:31.120 }' 00:17:31.120 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.120 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.120 09:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.120 [2024-12-12 09:31:05.047547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:31.120 [2024-12-12 09:31:05.047653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.120 [2024-12-12 09:31:05.047689] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:31.120 [2024-12-12 09:31:05.047713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.120 [2024-12-12 09:31:05.048440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.120 [2024-12-12 09:31:05.048473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:31.120 [2024-12-12 09:31:05.048595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:31.120 [2024-12-12 09:31:05.048621] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:31.120 [2024-12-12 09:31:05.048636] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:31.120 [2024-12-12 09:31:05.048651] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:17:31.120 BaseBdev1 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.120 09:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.062 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.321 09:31:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.321 "name": "raid_bdev1", 00:17:32.321 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:32.321 "strip_size_kb": 0, 00:17:32.321 "state": "online", 00:17:32.321 "raid_level": "raid1", 00:17:32.321 "superblock": true, 00:17:32.321 "num_base_bdevs": 2, 00:17:32.321 "num_base_bdevs_discovered": 1, 00:17:32.322 "num_base_bdevs_operational": 1, 00:17:32.322 "base_bdevs_list": [ 00:17:32.322 { 00:17:32.322 "name": null, 00:17:32.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.322 "is_configured": false, 00:17:32.322 "data_offset": 0, 00:17:32.322 "data_size": 7936 00:17:32.322 }, 00:17:32.322 { 00:17:32.322 "name": "BaseBdev2", 00:17:32.322 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:32.322 "is_configured": true, 00:17:32.322 "data_offset": 256, 00:17:32.322 "data_size": 7936 00:17:32.322 } 00:17:32.322 ] 00:17:32.322 }' 00:17:32.322 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.322 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.582 09:31:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.582 "name": "raid_bdev1", 00:17:32.582 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:32.582 "strip_size_kb": 0, 00:17:32.582 "state": "online", 00:17:32.582 "raid_level": "raid1", 00:17:32.582 "superblock": true, 00:17:32.582 "num_base_bdevs": 2, 00:17:32.582 "num_base_bdevs_discovered": 1, 00:17:32.582 "num_base_bdevs_operational": 1, 00:17:32.582 "base_bdevs_list": [ 00:17:32.582 { 00:17:32.582 "name": null, 00:17:32.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.582 "is_configured": false, 00:17:32.582 "data_offset": 0, 00:17:32.582 "data_size": 7936 00:17:32.582 }, 00:17:32.582 { 00:17:32.582 "name": "BaseBdev2", 00:17:32.582 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:32.582 "is_configured": true, 00:17:32.582 "data_offset": 256, 00:17:32.582 "data_size": 7936 00:17:32.582 } 00:17:32.582 ] 00:17:32.582 }' 00:17:32.582 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:32.842 09:31:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.842 [2024-12-12 09:31:06.672889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.842 [2024-12-12 09:31:06.673156] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.842 [2024-12-12 09:31:06.673175] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:32.842 request: 00:17:32.842 { 00:17:32.842 "base_bdev": "BaseBdev1", 00:17:32.842 "raid_bdev": "raid_bdev1", 00:17:32.842 "method": "bdev_raid_add_base_bdev", 00:17:32.842 "req_id": 1 00:17:32.842 } 00:17:32.842 Got JSON-RPC error response 00:17:32.842 response: 00:17:32.842 { 00:17:32.842 "code": -22, 00:17:32.842 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:32.842 } 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.842 09:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.782 09:31:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.782 "name": "raid_bdev1", 00:17:33.782 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:33.782 "strip_size_kb": 0, 00:17:33.782 "state": "online", 00:17:33.782 "raid_level": "raid1", 00:17:33.782 "superblock": true, 00:17:33.782 "num_base_bdevs": 2, 00:17:33.782 "num_base_bdevs_discovered": 1, 00:17:33.782 "num_base_bdevs_operational": 1, 00:17:33.782 "base_bdevs_list": [ 00:17:33.782 { 00:17:33.782 "name": null, 00:17:33.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.782 "is_configured": false, 00:17:33.782 "data_offset": 0, 00:17:33.782 "data_size": 7936 00:17:33.782 }, 00:17:33.782 { 00:17:33.782 "name": "BaseBdev2", 00:17:33.782 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:33.782 "is_configured": true, 00:17:33.782 "data_offset": 256, 00:17:33.782 "data_size": 7936 00:17:33.782 } 00:17:33.782 ] 00:17:33.782 }' 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.782 09:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.350 "name": "raid_bdev1", 00:17:34.350 "uuid": "3e5b94cd-ea2b-4eec-9d07-20f26407f626", 00:17:34.350 "strip_size_kb": 0, 00:17:34.350 "state": "online", 00:17:34.350 "raid_level": "raid1", 00:17:34.350 "superblock": true, 00:17:34.350 "num_base_bdevs": 2, 00:17:34.350 "num_base_bdevs_discovered": 1, 00:17:34.350 "num_base_bdevs_operational": 1, 00:17:34.350 "base_bdevs_list": [ 00:17:34.350 { 00:17:34.350 "name": null, 00:17:34.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.350 "is_configured": false, 00:17:34.350 "data_offset": 0, 00:17:34.350 "data_size": 7936 00:17:34.350 }, 00:17:34.350 { 00:17:34.350 "name": "BaseBdev2", 00:17:34.350 "uuid": "5d5cf80d-4d8d-58c9-8684-f655974b5ef2", 00:17:34.350 "is_configured": true, 00:17:34.350 "data_offset": 256, 00:17:34.350 "data_size": 7936 00:17:34.350 } 00:17:34.350 ] 00:17:34.350 }' 00:17:34.350 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 87665 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87665 ']' 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87665 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87665 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87665' 00:17:34.351 killing process with pid 87665 00:17:34.351 Received shutdown signal, test time was about 60.000000 seconds 00:17:34.351 00:17:34.351 Latency(us) 00:17:34.351 [2024-12-12T09:31:08.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.351 [2024-12-12T09:31:08.374Z] =================================================================================================================== 00:17:34.351 [2024-12-12T09:31:08.374Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87665 00:17:34.351 [2024-12-12 09:31:08.329157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.351 09:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87665 00:17:34.351 [2024-12-12 09:31:08.329336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.351 [2024-12-12 09:31:08.329399] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.351 [2024-12-12 09:31:08.329424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:34.926 [2024-12-12 09:31:08.663900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.325 09:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:36.325 00:17:36.325 real 0m20.579s 00:17:36.325 user 0m26.658s 00:17:36.325 sys 0m2.968s 00:17:36.325 09:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.325 ************************************ 00:17:36.325 END TEST raid_rebuild_test_sb_4k 00:17:36.325 ************************************ 00:17:36.325 09:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.325 09:31:10 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:36.325 09:31:10 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:36.325 09:31:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:36.325 09:31:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.325 09:31:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.325 ************************************ 00:17:36.325 START TEST raid_state_function_test_sb_md_separate 00:17:36.325 ************************************ 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:36.325 09:31:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:36.325 09:31:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:36.325 Process raid pid: 88361 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88361 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88361' 00:17:36.325 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88361 00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88361 ']' 00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.326 09:31:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.326 [2024-12-12 09:31:10.198640] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:36.326 [2024-12-12 09:31:10.198947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.585 [2024-12-12 09:31:10.376132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.585 [2024-12-12 09:31:10.553272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.845 [2024-12-12 09:31:10.855745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.845 [2024-12-12 09:31:10.855970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.414 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.414 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:37.414 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:37.414 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.414 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.414 [2024-12-12 09:31:11.177400] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.414 [2024-12-12 09:31:11.177612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:37.414 [2024-12-12 09:31:11.177655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.415 [2024-12-12 09:31:11.177694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.415 "name": "Existed_Raid", 00:17:37.415 "uuid": "e14bc9c5-60b6-4af3-b308-45049507a581", 00:17:37.415 "strip_size_kb": 0, 00:17:37.415 "state": "configuring", 00:17:37.415 "raid_level": "raid1", 00:17:37.415 "superblock": true, 00:17:37.415 "num_base_bdevs": 2, 00:17:37.415 "num_base_bdevs_discovered": 0, 00:17:37.415 "num_base_bdevs_operational": 2, 00:17:37.415 "base_bdevs_list": [ 00:17:37.415 { 00:17:37.415 "name": "BaseBdev1", 00:17:37.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.415 "is_configured": false, 00:17:37.415 "data_offset": 0, 00:17:37.415 "data_size": 0 00:17:37.415 }, 00:17:37.415 { 00:17:37.415 "name": "BaseBdev2", 00:17:37.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.415 "is_configured": false, 00:17:37.415 "data_offset": 0, 00:17:37.415 "data_size": 0 00:17:37.415 } 00:17:37.415 ] 00:17:37.415 }' 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.415 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.675 [2024-12-12 
09:31:11.664494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.675 [2024-12-12 09:31:11.664648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.675 [2024-12-12 09:31:11.676549] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.675 [2024-12-12 09:31:11.676713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.675 [2024-12-12 09:31:11.676756] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.675 [2024-12-12 09:31:11.676796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.675 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 [2024-12-12 09:31:11.737881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.936 BaseBdev1 
00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 [ 00:17:37.936 { 00:17:37.936 "name": "BaseBdev1", 00:17:37.936 "aliases": [ 00:17:37.936 "796091f7-620c-40ba-803c-77f9be7d963f" 00:17:37.936 ], 00:17:37.936 "product_name": "Malloc disk", 00:17:37.936 
"block_size": 4096, 00:17:37.936 "num_blocks": 8192, 00:17:37.936 "uuid": "796091f7-620c-40ba-803c-77f9be7d963f", 00:17:37.936 "md_size": 32, 00:17:37.936 "md_interleave": false, 00:17:37.936 "dif_type": 0, 00:17:37.936 "assigned_rate_limits": { 00:17:37.936 "rw_ios_per_sec": 0, 00:17:37.936 "rw_mbytes_per_sec": 0, 00:17:37.936 "r_mbytes_per_sec": 0, 00:17:37.936 "w_mbytes_per_sec": 0 00:17:37.936 }, 00:17:37.936 "claimed": true, 00:17:37.936 "claim_type": "exclusive_write", 00:17:37.936 "zoned": false, 00:17:37.936 "supported_io_types": { 00:17:37.936 "read": true, 00:17:37.936 "write": true, 00:17:37.936 "unmap": true, 00:17:37.936 "flush": true, 00:17:37.936 "reset": true, 00:17:37.936 "nvme_admin": false, 00:17:37.936 "nvme_io": false, 00:17:37.936 "nvme_io_md": false, 00:17:37.936 "write_zeroes": true, 00:17:37.936 "zcopy": true, 00:17:37.936 "get_zone_info": false, 00:17:37.936 "zone_management": false, 00:17:37.936 "zone_append": false, 00:17:37.936 "compare": false, 00:17:37.936 "compare_and_write": false, 00:17:37.936 "abort": true, 00:17:37.936 "seek_hole": false, 00:17:37.936 "seek_data": false, 00:17:37.936 "copy": true, 00:17:37.936 "nvme_iov_md": false 00:17:37.936 }, 00:17:37.936 "memory_domains": [ 00:17:37.936 { 00:17:37.936 "dma_device_id": "system", 00:17:37.936 "dma_device_type": 1 00:17:37.936 }, 00:17:37.936 { 00:17:37.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.936 "dma_device_type": 2 00:17:37.936 } 00:17:37.936 ], 00:17:37.936 "driver_specific": {} 00:17:37.936 } 00:17:37.936 ] 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.936 09:31:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.936 "name": "Existed_Raid", 00:17:37.936 "uuid": "49c1ecfa-4df0-4596-9722-1274b1bcad73", 
00:17:37.936 "strip_size_kb": 0, 00:17:37.936 "state": "configuring", 00:17:37.936 "raid_level": "raid1", 00:17:37.936 "superblock": true, 00:17:37.936 "num_base_bdevs": 2, 00:17:37.936 "num_base_bdevs_discovered": 1, 00:17:37.936 "num_base_bdevs_operational": 2, 00:17:37.936 "base_bdevs_list": [ 00:17:37.936 { 00:17:37.936 "name": "BaseBdev1", 00:17:37.936 "uuid": "796091f7-620c-40ba-803c-77f9be7d963f", 00:17:37.936 "is_configured": true, 00:17:37.936 "data_offset": 256, 00:17:37.936 "data_size": 7936 00:17:37.936 }, 00:17:37.936 { 00:17:37.936 "name": "BaseBdev2", 00:17:37.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.936 "is_configured": false, 00:17:37.936 "data_offset": 0, 00:17:37.936 "data_size": 0 00:17:37.936 } 00:17:37.936 ] 00:17:37.936 }' 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.936 09:31:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 [2024-12-12 09:31:12.237186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.508 [2024-12-12 09:31:12.237358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:38.508 09:31:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 [2024-12-12 09:31:12.249312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.508 [2024-12-12 09:31:12.252102] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.508 [2024-12-12 09:31:12.252265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.508 "name": "Existed_Raid", 00:17:38.508 "uuid": "4cf4ac5c-692d-41c2-8f00-7313dbbce507", 00:17:38.508 "strip_size_kb": 0, 00:17:38.508 "state": "configuring", 00:17:38.508 "raid_level": "raid1", 00:17:38.508 "superblock": true, 00:17:38.508 "num_base_bdevs": 2, 00:17:38.508 "num_base_bdevs_discovered": 1, 00:17:38.508 "num_base_bdevs_operational": 2, 00:17:38.508 "base_bdevs_list": [ 00:17:38.508 { 00:17:38.508 "name": "BaseBdev1", 00:17:38.508 "uuid": "796091f7-620c-40ba-803c-77f9be7d963f", 00:17:38.508 "is_configured": true, 00:17:38.508 "data_offset": 256, 00:17:38.508 "data_size": 7936 00:17:38.508 }, 00:17:38.508 { 00:17:38.508 "name": "BaseBdev2", 00:17:38.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.508 "is_configured": false, 00:17:38.508 "data_offset": 0, 00:17:38.508 "data_size": 0 00:17:38.508 } 00:17:38.508 ] 00:17:38.508 }' 00:17:38.508 09:31:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.508 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.774 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:38.774 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.774 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.033 [2024-12-12 09:31:12.816681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.033 [2024-12-12 09:31:12.817192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:39.033 [2024-12-12 09:31:12.817289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.033 [2024-12-12 09:31:12.817452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:39.033 BaseBdev2 00:17:39.033 [2024-12-12 09:31:12.817689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:39.033 [2024-12-12 09:31:12.817713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:39.033 [2024-12-12 09:31:12.817844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.033 [ 00:17:39.033 { 00:17:39.033 "name": "BaseBdev2", 00:17:39.033 "aliases": [ 00:17:39.033 "e3de0ea1-fad9-46fd-8eb5-1e06c2ae0118" 00:17:39.033 ], 00:17:39.033 "product_name": "Malloc disk", 00:17:39.033 "block_size": 4096, 00:17:39.033 "num_blocks": 8192, 00:17:39.033 "uuid": "e3de0ea1-fad9-46fd-8eb5-1e06c2ae0118", 00:17:39.033 "md_size": 32, 00:17:39.033 "md_interleave": false, 00:17:39.033 "dif_type": 0, 00:17:39.033 "assigned_rate_limits": { 00:17:39.033 "rw_ios_per_sec": 0, 00:17:39.033 "rw_mbytes_per_sec": 0, 00:17:39.033 "r_mbytes_per_sec": 0, 00:17:39.033 "w_mbytes_per_sec": 0 00:17:39.033 }, 00:17:39.033 "claimed": true, 00:17:39.033 "claim_type": 
"exclusive_write", 00:17:39.033 "zoned": false, 00:17:39.033 "supported_io_types": { 00:17:39.033 "read": true, 00:17:39.033 "write": true, 00:17:39.033 "unmap": true, 00:17:39.033 "flush": true, 00:17:39.033 "reset": true, 00:17:39.033 "nvme_admin": false, 00:17:39.033 "nvme_io": false, 00:17:39.033 "nvme_io_md": false, 00:17:39.033 "write_zeroes": true, 00:17:39.033 "zcopy": true, 00:17:39.033 "get_zone_info": false, 00:17:39.033 "zone_management": false, 00:17:39.033 "zone_append": false, 00:17:39.033 "compare": false, 00:17:39.033 "compare_and_write": false, 00:17:39.033 "abort": true, 00:17:39.033 "seek_hole": false, 00:17:39.033 "seek_data": false, 00:17:39.033 "copy": true, 00:17:39.033 "nvme_iov_md": false 00:17:39.033 }, 00:17:39.033 "memory_domains": [ 00:17:39.033 { 00:17:39.033 "dma_device_id": "system", 00:17:39.033 "dma_device_type": 1 00:17:39.033 }, 00:17:39.033 { 00:17:39.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.033 "dma_device_type": 2 00:17:39.033 } 00:17:39.033 ], 00:17:39.033 "driver_specific": {} 00:17:39.033 } 00:17:39.033 ] 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.033 
09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.033 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.034 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.034 "name": "Existed_Raid", 00:17:39.034 "uuid": "4cf4ac5c-692d-41c2-8f00-7313dbbce507", 00:17:39.034 "strip_size_kb": 0, 00:17:39.034 "state": "online", 00:17:39.034 "raid_level": "raid1", 00:17:39.034 "superblock": true, 00:17:39.034 "num_base_bdevs": 2, 00:17:39.034 "num_base_bdevs_discovered": 2, 00:17:39.034 "num_base_bdevs_operational": 2, 00:17:39.034 
"base_bdevs_list": [ 00:17:39.034 { 00:17:39.034 "name": "BaseBdev1", 00:17:39.034 "uuid": "796091f7-620c-40ba-803c-77f9be7d963f", 00:17:39.034 "is_configured": true, 00:17:39.034 "data_offset": 256, 00:17:39.034 "data_size": 7936 00:17:39.034 }, 00:17:39.034 { 00:17:39.034 "name": "BaseBdev2", 00:17:39.034 "uuid": "e3de0ea1-fad9-46fd-8eb5-1e06c2ae0118", 00:17:39.034 "is_configured": true, 00:17:39.034 "data_offset": 256, 00:17:39.034 "data_size": 7936 00:17:39.034 } 00:17:39.034 ] 00:17:39.034 }' 00:17:39.034 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.034 09:31:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:39.603 [2024-12-12 09:31:13.348502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.603 "name": "Existed_Raid", 00:17:39.603 "aliases": [ 00:17:39.603 "4cf4ac5c-692d-41c2-8f00-7313dbbce507" 00:17:39.603 ], 00:17:39.603 "product_name": "Raid Volume", 00:17:39.603 "block_size": 4096, 00:17:39.603 "num_blocks": 7936, 00:17:39.603 "uuid": "4cf4ac5c-692d-41c2-8f00-7313dbbce507", 00:17:39.603 "md_size": 32, 00:17:39.603 "md_interleave": false, 00:17:39.603 "dif_type": 0, 00:17:39.603 "assigned_rate_limits": { 00:17:39.603 "rw_ios_per_sec": 0, 00:17:39.603 "rw_mbytes_per_sec": 0, 00:17:39.603 "r_mbytes_per_sec": 0, 00:17:39.603 "w_mbytes_per_sec": 0 00:17:39.603 }, 00:17:39.603 "claimed": false, 00:17:39.603 "zoned": false, 00:17:39.603 "supported_io_types": { 00:17:39.603 "read": true, 00:17:39.603 "write": true, 00:17:39.603 "unmap": false, 00:17:39.603 "flush": false, 00:17:39.603 "reset": true, 00:17:39.603 "nvme_admin": false, 00:17:39.603 "nvme_io": false, 00:17:39.603 "nvme_io_md": false, 00:17:39.603 "write_zeroes": true, 00:17:39.603 "zcopy": false, 00:17:39.603 "get_zone_info": false, 00:17:39.603 "zone_management": false, 00:17:39.603 "zone_append": false, 00:17:39.603 "compare": false, 00:17:39.603 "compare_and_write": false, 00:17:39.603 "abort": false, 00:17:39.603 "seek_hole": false, 00:17:39.603 "seek_data": false, 00:17:39.603 "copy": false, 00:17:39.603 "nvme_iov_md": false 00:17:39.603 }, 00:17:39.603 "memory_domains": [ 00:17:39.603 { 00:17:39.603 "dma_device_id": "system", 00:17:39.603 "dma_device_type": 1 00:17:39.603 }, 00:17:39.603 { 00:17:39.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.603 "dma_device_type": 2 00:17:39.603 }, 00:17:39.603 { 
00:17:39.603 "dma_device_id": "system", 00:17:39.603 "dma_device_type": 1 00:17:39.603 }, 00:17:39.603 { 00:17:39.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.603 "dma_device_type": 2 00:17:39.603 } 00:17:39.603 ], 00:17:39.603 "driver_specific": { 00:17:39.603 "raid": { 00:17:39.603 "uuid": "4cf4ac5c-692d-41c2-8f00-7313dbbce507", 00:17:39.603 "strip_size_kb": 0, 00:17:39.603 "state": "online", 00:17:39.603 "raid_level": "raid1", 00:17:39.603 "superblock": true, 00:17:39.603 "num_base_bdevs": 2, 00:17:39.603 "num_base_bdevs_discovered": 2, 00:17:39.603 "num_base_bdevs_operational": 2, 00:17:39.603 "base_bdevs_list": [ 00:17:39.603 { 00:17:39.603 "name": "BaseBdev1", 00:17:39.603 "uuid": "796091f7-620c-40ba-803c-77f9be7d963f", 00:17:39.603 "is_configured": true, 00:17:39.603 "data_offset": 256, 00:17:39.603 "data_size": 7936 00:17:39.603 }, 00:17:39.603 { 00:17:39.603 "name": "BaseBdev2", 00:17:39.603 "uuid": "e3de0ea1-fad9-46fd-8eb5-1e06c2ae0118", 00:17:39.603 "is_configured": true, 00:17:39.603 "data_offset": 256, 00:17:39.603 "data_size": 7936 00:17:39.603 } 00:17:39.603 ] 00:17:39.603 } 00:17:39.603 } 00:17:39.603 }' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:39.603 BaseBdev2' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.603 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.603 [2024-12-12 09:31:13.592047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.863 "name": "Existed_Raid", 00:17:39.863 "uuid": "4cf4ac5c-692d-41c2-8f00-7313dbbce507", 00:17:39.863 "strip_size_kb": 0, 00:17:39.863 "state": "online", 00:17:39.863 "raid_level": "raid1", 00:17:39.863 "superblock": true, 00:17:39.863 "num_base_bdevs": 2, 00:17:39.863 "num_base_bdevs_discovered": 1, 00:17:39.863 "num_base_bdevs_operational": 1, 00:17:39.863 "base_bdevs_list": [ 00:17:39.863 { 00:17:39.863 "name": null, 00:17:39.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.863 "is_configured": false, 00:17:39.863 "data_offset": 0, 00:17:39.863 "data_size": 7936 00:17:39.863 }, 00:17:39.863 { 00:17:39.863 "name": "BaseBdev2", 00:17:39.863 "uuid": 
"e3de0ea1-fad9-46fd-8eb5-1e06c2ae0118", 00:17:39.863 "is_configured": true, 00:17:39.863 "data_offset": 256, 00:17:39.863 "data_size": 7936 00:17:39.863 } 00:17:39.863 ] 00:17:39.863 }' 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.863 09:31:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.431 [2024-12-12 09:31:14.255234] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:40.431 [2024-12-12 09:31:14.255511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.431 [2024-12-12 09:31:14.395919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.431 [2024-12-12 09:31:14.396137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.431 [2024-12-12 09:31:14.396186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:40.431 09:31:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88361 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88361 ']' 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88361 00:17:40.431 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88361 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88361' 00:17:40.690 killing process with pid 88361 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88361 00:17:40.690 [2024-12-12 09:31:14.486969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.690 09:31:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88361 00:17:40.690 [2024-12-12 09:31:14.510083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:42.070 09:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:42.070 00:17:42.070 real 0m5.917s 00:17:42.070 user 0m8.307s 00:17:42.070 sys 0m1.047s 00:17:42.070 09:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.070 
09:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.070 ************************************ 00:17:42.070 END TEST raid_state_function_test_sb_md_separate 00:17:42.070 ************************************ 00:17:42.070 09:31:16 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:42.070 09:31:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:42.070 09:31:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.070 09:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.070 ************************************ 00:17:42.070 START TEST raid_superblock_test_md_separate 00:17:42.070 ************************************ 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88619 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88619 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88619 ']' 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.070 09:31:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.338 [2024-12-12 09:31:16.173987] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:42.338 [2024-12-12 09:31:16.174158] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88619 ] 00:17:42.603 [2024-12-12 09:31:16.362421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.603 [2024-12-12 09:31:16.528407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.862 [2024-12-12 09:31:16.823391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.862 [2024-12-12 09:31:16.823468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:43.121 09:31:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.121 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 malloc1 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 [2024-12-12 09:31:17.204301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:43.380 [2024-12-12 09:31:17.204542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.380 [2024-12-12 09:31:17.204601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:43.380 [2024-12-12 09:31:17.204639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.380 [2024-12-12 09:31:17.207435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.380 [2024-12-12 09:31:17.207563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:43.380 pt1 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 malloc2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.380 09:31:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 [2024-12-12 09:31:17.277880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.380 [2024-12-12 09:31:17.278149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.380 [2024-12-12 09:31:17.278208] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:43.380 [2024-12-12 09:31:17.278249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.380 [2024-12-12 09:31:17.281049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.380 [2024-12-12 09:31:17.281177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.380 pt2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 [2024-12-12 09:31:17.290022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:43.380 [2024-12-12 09:31:17.292740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.380 [2024-12-12 09:31:17.293114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:43.380 [2024-12-12 09:31:17.293178] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.380 [2024-12-12 09:31:17.293335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:43.380 [2024-12-12 09:31:17.293549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:43.380 [2024-12-12 09:31:17.293602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:43.380 [2024-12-12 09:31:17.293856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.380 09:31:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.380 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.380 "name": "raid_bdev1", 00:17:43.380 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:43.380 "strip_size_kb": 0, 00:17:43.380 "state": "online", 00:17:43.380 "raid_level": "raid1", 00:17:43.380 "superblock": true, 00:17:43.380 "num_base_bdevs": 2, 00:17:43.380 "num_base_bdevs_discovered": 2, 00:17:43.380 "num_base_bdevs_operational": 2, 00:17:43.380 "base_bdevs_list": [ 00:17:43.380 { 00:17:43.380 "name": "pt1", 00:17:43.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.380 "is_configured": true, 00:17:43.380 "data_offset": 256, 00:17:43.380 "data_size": 7936 00:17:43.380 }, 00:17:43.380 { 00:17:43.381 "name": "pt2", 00:17:43.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.381 "is_configured": true, 00:17:43.381 "data_offset": 256, 00:17:43.381 "data_size": 7936 00:17:43.381 } 00:17:43.381 ] 00:17:43.381 }' 00:17:43.381 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.381 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.948 [2024-12-12 09:31:17.753536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.948 "name": "raid_bdev1", 00:17:43.948 "aliases": [ 00:17:43.948 "31786b5d-8a5b-4405-8f12-213969e52989" 00:17:43.948 ], 00:17:43.948 "product_name": "Raid Volume", 00:17:43.948 "block_size": 4096, 00:17:43.948 "num_blocks": 7936, 00:17:43.948 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:43.948 "md_size": 32, 00:17:43.948 "md_interleave": false, 00:17:43.948 "dif_type": 0, 00:17:43.948 "assigned_rate_limits": { 00:17:43.948 "rw_ios_per_sec": 0, 00:17:43.948 "rw_mbytes_per_sec": 0, 00:17:43.948 "r_mbytes_per_sec": 0, 00:17:43.948 "w_mbytes_per_sec": 0 00:17:43.948 }, 00:17:43.948 "claimed": false, 00:17:43.948 "zoned": false, 
00:17:43.948 "supported_io_types": { 00:17:43.948 "read": true, 00:17:43.948 "write": true, 00:17:43.948 "unmap": false, 00:17:43.948 "flush": false, 00:17:43.948 "reset": true, 00:17:43.948 "nvme_admin": false, 00:17:43.948 "nvme_io": false, 00:17:43.948 "nvme_io_md": false, 00:17:43.948 "write_zeroes": true, 00:17:43.948 "zcopy": false, 00:17:43.948 "get_zone_info": false, 00:17:43.948 "zone_management": false, 00:17:43.948 "zone_append": false, 00:17:43.948 "compare": false, 00:17:43.948 "compare_and_write": false, 00:17:43.948 "abort": false, 00:17:43.948 "seek_hole": false, 00:17:43.948 "seek_data": false, 00:17:43.948 "copy": false, 00:17:43.948 "nvme_iov_md": false 00:17:43.948 }, 00:17:43.948 "memory_domains": [ 00:17:43.948 { 00:17:43.948 "dma_device_id": "system", 00:17:43.948 "dma_device_type": 1 00:17:43.948 }, 00:17:43.948 { 00:17:43.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.948 "dma_device_type": 2 00:17:43.948 }, 00:17:43.948 { 00:17:43.948 "dma_device_id": "system", 00:17:43.948 "dma_device_type": 1 00:17:43.948 }, 00:17:43.948 { 00:17:43.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.948 "dma_device_type": 2 00:17:43.948 } 00:17:43.948 ], 00:17:43.948 "driver_specific": { 00:17:43.948 "raid": { 00:17:43.948 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:43.948 "strip_size_kb": 0, 00:17:43.948 "state": "online", 00:17:43.948 "raid_level": "raid1", 00:17:43.948 "superblock": true, 00:17:43.948 "num_base_bdevs": 2, 00:17:43.948 "num_base_bdevs_discovered": 2, 00:17:43.948 "num_base_bdevs_operational": 2, 00:17:43.948 "base_bdevs_list": [ 00:17:43.948 { 00:17:43.948 "name": "pt1", 00:17:43.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.948 "is_configured": true, 00:17:43.948 "data_offset": 256, 00:17:43.948 "data_size": 7936 00:17:43.948 }, 00:17:43.948 { 00:17:43.948 "name": "pt2", 00:17:43.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.948 "is_configured": true, 00:17:43.948 "data_offset": 256, 
00:17:43.948 "data_size": 7936 00:17:43.948 } 00:17:43.948 ] 00:17:43.948 } 00:17:43.948 } 00:17:43.948 }' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:43.948 pt2' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:43.948 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:43.949 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:43.949 09:31:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.949 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.949 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.207 [2024-12-12 09:31:17.973164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.207 09:31:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=31786b5d-8a5b-4405-8f12-213969e52989 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 31786b5d-8a5b-4405-8f12-213969e52989 ']' 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.207 [2024-12-12 09:31:18.016693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.207 [2024-12-12 09:31:18.016854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.207 [2024-12-12 09:31:18.017042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.207 [2024-12-12 09:31:18.017195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.207 [2024-12-12 09:31:18.017256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:44.207 09:31:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.207 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.208 [2024-12-12 09:31:18.148532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:44.208 [2024-12-12 09:31:18.151361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:44.208 [2024-12-12 09:31:18.151574] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:44.208 [2024-12-12 09:31:18.151665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:44.208 [2024-12-12 09:31:18.151685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.208 [2024-12-12 09:31:18.151698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:44.208 request: 00:17:44.208 { 00:17:44.208 "name": 
"raid_bdev1", 00:17:44.208 "raid_level": "raid1", 00:17:44.208 "base_bdevs": [ 00:17:44.208 "malloc1", 00:17:44.208 "malloc2" 00:17:44.208 ], 00:17:44.208 "superblock": false, 00:17:44.208 "method": "bdev_raid_create", 00:17:44.208 "req_id": 1 00:17:44.208 } 00:17:44.208 Got JSON-RPC error response 00:17:44.208 response: 00:17:44.208 { 00:17:44.208 "code": -17, 00:17:44.208 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:44.208 } 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.208 [2024-12-12 09:31:18.212527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:44.208 [2024-12-12 09:31:18.212771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.208 [2024-12-12 09:31:18.212830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:44.208 [2024-12-12 09:31:18.212870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.208 [2024-12-12 09:31:18.215748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.208 [2024-12-12 09:31:18.215874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:44.208 [2024-12-12 09:31:18.216005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:44.208 pt1 00:17:44.208 [2024-12-12 09:31:18.216119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.208 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.466 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.466 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.466 "name": "raid_bdev1", 00:17:44.466 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:44.466 "strip_size_kb": 0, 00:17:44.466 "state": "configuring", 00:17:44.466 "raid_level": "raid1", 00:17:44.466 "superblock": true, 00:17:44.466 "num_base_bdevs": 2, 00:17:44.466 "num_base_bdevs_discovered": 1, 00:17:44.466 "num_base_bdevs_operational": 2, 00:17:44.466 "base_bdevs_list": [ 00:17:44.466 { 00:17:44.466 "name": "pt1", 00:17:44.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.466 "is_configured": true, 00:17:44.466 "data_offset": 256, 00:17:44.466 "data_size": 7936 00:17:44.466 }, 00:17:44.466 { 00:17:44.466 "name": null, 00:17:44.466 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.466 "is_configured": false, 00:17:44.466 "data_offset": 256, 00:17:44.466 "data_size": 7936 00:17:44.466 } 00:17:44.466 ] 00:17:44.466 }' 00:17:44.466 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.466 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.726 [2024-12-12 09:31:18.704108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:44.726 [2024-12-12 09:31:18.704374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.726 [2024-12-12 09:31:18.704448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:44.726 [2024-12-12 09:31:18.704490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.726 [2024-12-12 09:31:18.704897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.726 [2024-12-12 09:31:18.704978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:44.726 [2024-12-12 09:31:18.705092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:44.726 [2024-12-12 09:31:18.705158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:44.726 [2024-12-12 09:31:18.705345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:44.726 [2024-12-12 09:31:18.705395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.726 [2024-12-12 09:31:18.705540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:44.726 [2024-12-12 09:31:18.705746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:44.726 [2024-12-12 09:31:18.705792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:44.726 [2024-12-12 09:31:18.706000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.726 pt2 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.726 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.984 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.984 "name": "raid_bdev1", 00:17:44.984 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:44.984 "strip_size_kb": 0, 00:17:44.984 "state": "online", 00:17:44.984 "raid_level": "raid1", 00:17:44.984 "superblock": true, 00:17:44.984 "num_base_bdevs": 2, 00:17:44.984 "num_base_bdevs_discovered": 2, 00:17:44.984 "num_base_bdevs_operational": 2, 00:17:44.984 "base_bdevs_list": [ 00:17:44.984 { 00:17:44.984 "name": "pt1", 00:17:44.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.984 "is_configured": true, 00:17:44.984 "data_offset": 256, 00:17:44.984 "data_size": 7936 00:17:44.984 }, 00:17:44.984 { 00:17:44.984 "name": "pt2", 00:17:44.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.984 "is_configured": true, 00:17:44.984 "data_offset": 256, 
00:17:44.984 "data_size": 7936 00:17:44.984 } 00:17:44.984 ] 00:17:44.984 }' 00:17:44.984 09:31:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.984 09:31:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:45.248 [2024-12-12 09:31:19.223720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.248 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:45.248 "name": "raid_bdev1", 00:17:45.248 "aliases": [ 00:17:45.248 "31786b5d-8a5b-4405-8f12-213969e52989" 00:17:45.248 ], 00:17:45.248 "product_name": 
"Raid Volume", 00:17:45.248 "block_size": 4096, 00:17:45.248 "num_blocks": 7936, 00:17:45.248 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:45.248 "md_size": 32, 00:17:45.248 "md_interleave": false, 00:17:45.248 "dif_type": 0, 00:17:45.248 "assigned_rate_limits": { 00:17:45.248 "rw_ios_per_sec": 0, 00:17:45.248 "rw_mbytes_per_sec": 0, 00:17:45.248 "r_mbytes_per_sec": 0, 00:17:45.248 "w_mbytes_per_sec": 0 00:17:45.248 }, 00:17:45.248 "claimed": false, 00:17:45.248 "zoned": false, 00:17:45.248 "supported_io_types": { 00:17:45.248 "read": true, 00:17:45.248 "write": true, 00:17:45.248 "unmap": false, 00:17:45.248 "flush": false, 00:17:45.248 "reset": true, 00:17:45.248 "nvme_admin": false, 00:17:45.248 "nvme_io": false, 00:17:45.248 "nvme_io_md": false, 00:17:45.248 "write_zeroes": true, 00:17:45.248 "zcopy": false, 00:17:45.248 "get_zone_info": false, 00:17:45.248 "zone_management": false, 00:17:45.248 "zone_append": false, 00:17:45.248 "compare": false, 00:17:45.248 "compare_and_write": false, 00:17:45.248 "abort": false, 00:17:45.248 "seek_hole": false, 00:17:45.248 "seek_data": false, 00:17:45.248 "copy": false, 00:17:45.248 "nvme_iov_md": false 00:17:45.248 }, 00:17:45.248 "memory_domains": [ 00:17:45.248 { 00:17:45.248 "dma_device_id": "system", 00:17:45.248 "dma_device_type": 1 00:17:45.248 }, 00:17:45.248 { 00:17:45.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.248 "dma_device_type": 2 00:17:45.248 }, 00:17:45.248 { 00:17:45.248 "dma_device_id": "system", 00:17:45.248 "dma_device_type": 1 00:17:45.248 }, 00:17:45.248 { 00:17:45.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.248 "dma_device_type": 2 00:17:45.248 } 00:17:45.248 ], 00:17:45.248 "driver_specific": { 00:17:45.248 "raid": { 00:17:45.248 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:45.248 "strip_size_kb": 0, 00:17:45.248 "state": "online", 00:17:45.248 "raid_level": "raid1", 00:17:45.248 "superblock": true, 00:17:45.248 "num_base_bdevs": 2, 00:17:45.248 
"num_base_bdevs_discovered": 2, 00:17:45.248 "num_base_bdevs_operational": 2, 00:17:45.248 "base_bdevs_list": [ 00:17:45.248 { 00:17:45.248 "name": "pt1", 00:17:45.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:45.248 "is_configured": true, 00:17:45.248 "data_offset": 256, 00:17:45.248 "data_size": 7936 00:17:45.248 }, 00:17:45.248 { 00:17:45.248 "name": "pt2", 00:17:45.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.248 "is_configured": true, 00:17:45.248 "data_offset": 256, 00:17:45.248 "data_size": 7936 00:17:45.248 } 00:17:45.248 ] 00:17:45.248 } 00:17:45.248 } 00:17:45.248 }' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:45.519 pt2' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.519 
09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 [2024-12-12 09:31:19.475398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 31786b5d-8a5b-4405-8f12-213969e52989 '!=' 31786b5d-8a5b-4405-8f12-213969e52989 ']' 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:45.519 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 [2024-12-12 09:31:19.519024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.520 09:31:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.778 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.778 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.778 "name": "raid_bdev1", 00:17:45.779 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:45.779 "strip_size_kb": 0, 00:17:45.779 "state": "online", 00:17:45.779 "raid_level": "raid1", 00:17:45.779 "superblock": true, 00:17:45.779 "num_base_bdevs": 2, 00:17:45.779 "num_base_bdevs_discovered": 1, 00:17:45.779 "num_base_bdevs_operational": 1, 00:17:45.779 "base_bdevs_list": [ 00:17:45.779 { 00:17:45.779 "name": null, 00:17:45.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.779 "is_configured": false, 00:17:45.779 "data_offset": 0, 00:17:45.779 "data_size": 7936 00:17:45.779 }, 00:17:45.779 { 00:17:45.779 "name": "pt2", 00:17:45.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.779 "is_configured": true, 00:17:45.779 "data_offset": 256, 00:17:45.779 "data_size": 7936 00:17:45.779 } 00:17:45.779 ] 00:17:45.779 }' 00:17:45.779 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:45.779 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 09:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.086 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.086 09:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 [2024-12-12 09:31:19.998194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.086 [2024-12-12 09:31:19.998383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.086 [2024-12-12 09:31:19.998519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.086 [2024-12-12 09:31:19.998592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.086 [2024-12-12 09:31:19.998608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:46.086 09:31:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 [2024-12-12 09:31:20.074083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.086 [2024-12-12 09:31:20.074312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.086 
[2024-12-12 09:31:20.074360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:46.086 [2024-12-12 09:31:20.074404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.086 [2024-12-12 09:31:20.077338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.086 [2024-12-12 09:31:20.077488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.086 [2024-12-12 09:31:20.077613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:46.086 [2024-12-12 09:31:20.077719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.086 [2024-12-12 09:31:20.077887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:46.086 [2024-12-12 09:31:20.077934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.086 [2024-12-12 09:31:20.078108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:46.086 [2024-12-12 09:31:20.078309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:46.086 [2024-12-12 09:31:20.078354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:46.086 [2024-12-12 09:31:20.078611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.086 pt2 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.086 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.351 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.351 "name": "raid_bdev1", 00:17:46.351 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:46.351 "strip_size_kb": 0, 00:17:46.351 "state": "online", 00:17:46.351 "raid_level": "raid1", 00:17:46.351 "superblock": true, 00:17:46.351 "num_base_bdevs": 2, 00:17:46.351 "num_base_bdevs_discovered": 1, 00:17:46.351 "num_base_bdevs_operational": 1, 00:17:46.351 "base_bdevs_list": [ 00:17:46.351 { 00:17:46.351 
"name": null, 00:17:46.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.351 "is_configured": false, 00:17:46.351 "data_offset": 256, 00:17:46.351 "data_size": 7936 00:17:46.351 }, 00:17:46.351 { 00:17:46.351 "name": "pt2", 00:17:46.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.351 "is_configured": true, 00:17:46.351 "data_offset": 256, 00:17:46.351 "data_size": 7936 00:17:46.351 } 00:17:46.351 ] 00:17:46.351 }' 00:17:46.351 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.351 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.611 [2024-12-12 09:31:20.577223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.611 [2024-12-12 09:31:20.577403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.611 [2024-12-12 09:31:20.577550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.611 [2024-12-12 09:31:20.577658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.611 [2024-12-12 09:31:20.577713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.611 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.870 [2024-12-12 09:31:20.641233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.870 [2024-12-12 09:31:20.641455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.870 [2024-12-12 09:31:20.641519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:46.870 [2024-12-12 09:31:20.641558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.870 [2024-12-12 09:31:20.644530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.870 [2024-12-12 09:31:20.644710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.870 [2024-12-12 09:31:20.644851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:46.870 
[2024-12-12 09:31:20.644973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.870 [2024-12-12 09:31:20.645216] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:46.870 [2024-12-12 09:31:20.645278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.870 [2024-12-12 09:31:20.645363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:46.870 [2024-12-12 09:31:20.645514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.870 [2024-12-12 09:31:20.645735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:46.870 [2024-12-12 09:31:20.645787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.870 [2024-12-12 09:31:20.645922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:46.870 pt1 00:17:46.870 [2024-12-12 09:31:20.646130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:46.870 [2024-12-12 09:31:20.646149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:46.870 [2024-12-12 09:31:20.646307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.870 09:31:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.870 "name": "raid_bdev1", 00:17:46.870 "uuid": "31786b5d-8a5b-4405-8f12-213969e52989", 00:17:46.870 "strip_size_kb": 0, 00:17:46.870 "state": "online", 00:17:46.870 "raid_level": "raid1", 00:17:46.870 "superblock": true, 00:17:46.870 "num_base_bdevs": 2, 00:17:46.870 "num_base_bdevs_discovered": 1, 00:17:46.870 
"num_base_bdevs_operational": 1, 00:17:46.870 "base_bdevs_list": [ 00:17:46.870 { 00:17:46.870 "name": null, 00:17:46.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.870 "is_configured": false, 00:17:46.870 "data_offset": 256, 00:17:46.870 "data_size": 7936 00:17:46.870 }, 00:17:46.870 { 00:17:46.870 "name": "pt2", 00:17:46.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.870 "is_configured": true, 00:17:46.870 "data_offset": 256, 00:17:46.870 "data_size": 7936 00:17:46.870 } 00:17:46.870 ] 00:17:46.870 }' 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.870 09:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.129 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:47.129 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.129 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.129 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.388 [2024-12-12 
09:31:21.200870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 31786b5d-8a5b-4405-8f12-213969e52989 '!=' 31786b5d-8a5b-4405-8f12-213969e52989 ']' 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88619 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88619 ']' 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88619 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88619 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88619' 00:17:47.388 killing process with pid 88619 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88619 00:17:47.388 [2024-12-12 09:31:21.258753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.388 09:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88619 00:17:47.388 [2024-12-12 09:31:21.258900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:47.388 [2024-12-12 09:31:21.258988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.388 [2024-12-12 09:31:21.259013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:47.646 [2024-12-12 09:31:21.559261] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.549 09:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:49.549 00:17:49.549 real 0m7.007s 00:17:49.549 user 0m10.361s 00:17:49.549 sys 0m1.227s 00:17:49.549 09:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.549 ************************************ 00:17:49.549 END TEST raid_superblock_test_md_separate 00:17:49.549 ************************************ 00:17:49.549 09:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.549 09:31:23 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:49.549 09:31:23 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:49.549 09:31:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:49.549 09:31:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.549 09:31:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.549 ************************************ 00:17:49.549 START TEST raid_rebuild_test_sb_md_separate 00:17:49.549 ************************************ 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:49.549 
09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88949 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88949 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88949 ']' 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.549 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.549 [2024-12-12 09:31:23.255155] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:49.549 [2024-12-12 09:31:23.255445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:49.549 Zero copy mechanism will not be used. 00:17:49.549 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88949 ] 00:17:49.549 [2024-12-12 09:31:23.441949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.809 [2024-12-12 09:31:23.611568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.069 [2024-12-12 09:31:23.894122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.069 [2024-12-12 09:31:23.894335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.343 BaseBdev1_malloc 
00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.343 [2024-12-12 09:31:24.306839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:50.343 [2024-12-12 09:31:24.307091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.343 [2024-12-12 09:31:24.307192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:50.343 [2024-12-12 09:31:24.307251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.343 [2024-12-12 09:31:24.310364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.343 [2024-12-12 09:31:24.310527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:50.343 BaseBdev1 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.343 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.620 BaseBdev2_malloc 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.620 [2024-12-12 09:31:24.379848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:50.620 [2024-12-12 09:31:24.380108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.620 [2024-12-12 09:31:24.380173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:50.620 [2024-12-12 09:31:24.380223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.620 [2024-12-12 09:31:24.383093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.620 [2024-12-12 09:31:24.383243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:50.620 BaseBdev2 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.620 spare_malloc 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.620 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.620 spare_delay 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.621 [2024-12-12 09:31:24.476857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:50.621 [2024-12-12 09:31:24.477170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.621 [2024-12-12 09:31:24.477237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:50.621 [2024-12-12 09:31:24.477313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.621 [2024-12-12 09:31:24.480227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.621 [2024-12-12 09:31:24.480382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:50.621 spare 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.621 [2024-12-12 09:31:24.488986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.621 [2024-12-12 09:31:24.491783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.621 [2024-12-12 09:31:24.492229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:50.621 [2024-12-12 09:31:24.492303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.621 [2024-12-12 09:31:24.492501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:50.621 [2024-12-12 09:31:24.492744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:50.621 [2024-12-12 09:31:24.492800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:50.621 [2024-12-12 09:31:24.493167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.621 09:31:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.621 "name": "raid_bdev1", 00:17:50.621 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:50.621 "strip_size_kb": 0, 00:17:50.621 "state": "online", 00:17:50.621 "raid_level": "raid1", 00:17:50.621 "superblock": true, 00:17:50.621 "num_base_bdevs": 2, 00:17:50.621 "num_base_bdevs_discovered": 2, 00:17:50.621 "num_base_bdevs_operational": 2, 00:17:50.621 "base_bdevs_list": [ 00:17:50.621 { 00:17:50.621 "name": "BaseBdev1", 00:17:50.621 "uuid": "7eaa116d-4521-5a7d-a271-7bafc875dd40", 00:17:50.621 "is_configured": true, 00:17:50.621 "data_offset": 256, 00:17:50.621 "data_size": 7936 00:17:50.621 }, 00:17:50.621 { 00:17:50.621 "name": "BaseBdev2", 00:17:50.621 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:50.621 "is_configured": true, 00:17:50.621 "data_offset": 256, 00:17:50.621 "data_size": 7936 
00:17:50.621 } 00:17:50.621 ] 00:17:50.621 }' 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.621 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.190 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:51.191 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.191 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.191 09:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.191 [2024-12-12 09:31:24.984703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:51.191 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:51.450 [2024-12-12 09:31:25.340138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.450 /dev/nbd0 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.450 1+0 records in 00:17:51.450 1+0 records out 00:17:51.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054984 s, 7.4 MB/s 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:51.450 09:31:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:51.450 09:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:52.386 7936+0 records in 00:17:52.386 7936+0 records out 00:17:52.386 32505856 bytes (33 MB, 31 MiB) copied, 0.771446 s, 42.1 MB/s 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.386 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:52.644 [2024-12-12 09:31:26.433137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.644 09:31:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.644 [2024-12-12 09:31:26.475043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.644 "name": "raid_bdev1", 00:17:52.644 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:52.644 "strip_size_kb": 0, 00:17:52.644 "state": "online", 00:17:52.644 "raid_level": "raid1", 00:17:52.644 "superblock": true, 00:17:52.644 "num_base_bdevs": 2, 00:17:52.644 "num_base_bdevs_discovered": 1, 00:17:52.644 "num_base_bdevs_operational": 1, 00:17:52.644 "base_bdevs_list": [ 00:17:52.644 { 00:17:52.644 "name": null, 00:17:52.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.644 "is_configured": false, 00:17:52.644 "data_offset": 0, 00:17:52.644 "data_size": 7936 00:17:52.644 }, 00:17:52.644 { 00:17:52.644 "name": "BaseBdev2", 00:17:52.644 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:52.644 "is_configured": true, 00:17:52.644 "data_offset": 256, 00:17:52.644 "data_size": 7936 00:17:52.644 } 00:17:52.644 ] 00:17:52.644 }' 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.644 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.211 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.211 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.211 09:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.211 [2024-12-12 09:31:26.982273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.211 [2024-12-12 09:31:27.002210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:53.211 09:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.211 09:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:53.211 [2024-12-12 09:31:27.004902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.187 "name": "raid_bdev1", 00:17:54.187 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:54.187 "strip_size_kb": 0, 00:17:54.187 "state": "online", 00:17:54.187 "raid_level": "raid1", 00:17:54.187 "superblock": true, 00:17:54.187 "num_base_bdevs": 2, 00:17:54.187 "num_base_bdevs_discovered": 2, 00:17:54.187 "num_base_bdevs_operational": 2, 00:17:54.187 "process": { 00:17:54.187 "type": "rebuild", 00:17:54.187 "target": "spare", 00:17:54.187 "progress": { 00:17:54.187 "blocks": 2560, 00:17:54.187 "percent": 32 00:17:54.187 } 00:17:54.187 }, 00:17:54.187 "base_bdevs_list": [ 00:17:54.187 { 00:17:54.187 "name": "spare", 00:17:54.187 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:54.187 "is_configured": true, 00:17:54.187 "data_offset": 256, 00:17:54.187 "data_size": 7936 00:17:54.187 }, 00:17:54.187 { 00:17:54.187 "name": "BaseBdev2", 00:17:54.187 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:54.187 "is_configured": true, 00:17:54.187 "data_offset": 256, 00:17:54.187 "data_size": 7936 00:17:54.187 } 00:17:54.187 ] 00:17:54.187 }' 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.187 09:31:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.187 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.187 [2024-12-12 09:31:28.168934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.446 [2024-12-12 09:31:28.216901] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.446 [2024-12-12 09:31:28.217175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.446 [2024-12-12 09:31:28.217210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.446 [2024-12-12 09:31:28.217224] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.446 09:31:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.446 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.447 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.447 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.447 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.447 "name": "raid_bdev1", 00:17:54.447 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:54.447 "strip_size_kb": 0, 00:17:54.447 "state": "online", 00:17:54.447 "raid_level": "raid1", 00:17:54.447 "superblock": true, 00:17:54.447 "num_base_bdevs": 2, 00:17:54.447 "num_base_bdevs_discovered": 1, 00:17:54.447 "num_base_bdevs_operational": 1, 00:17:54.447 "base_bdevs_list": [ 00:17:54.447 { 00:17:54.447 "name": null, 00:17:54.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.447 "is_configured": false, 00:17:54.447 "data_offset": 0, 00:17:54.447 "data_size": 7936 00:17:54.447 }, 00:17:54.447 { 00:17:54.447 "name": "BaseBdev2", 00:17:54.447 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:54.447 "is_configured": true, 00:17:54.447 "data_offset": 256, 00:17:54.447 "data_size": 7936 00:17:54.447 } 00:17:54.447 ] 00:17:54.447 }' 00:17:54.447 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.447 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.705 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.963 "name": "raid_bdev1", 00:17:54.963 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:54.963 "strip_size_kb": 0, 00:17:54.963 "state": "online", 00:17:54.963 "raid_level": "raid1", 00:17:54.963 "superblock": true, 00:17:54.963 "num_base_bdevs": 2, 00:17:54.963 "num_base_bdevs_discovered": 1, 00:17:54.963 "num_base_bdevs_operational": 1, 00:17:54.963 "base_bdevs_list": [ 00:17:54.963 { 00:17:54.963 "name": null, 00:17:54.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.963 
"is_configured": false, 00:17:54.963 "data_offset": 0, 00:17:54.963 "data_size": 7936 00:17:54.963 }, 00:17:54.963 { 00:17:54.963 "name": "BaseBdev2", 00:17:54.963 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:54.963 "is_configured": true, 00:17:54.963 "data_offset": 256, 00:17:54.963 "data_size": 7936 00:17:54.963 } 00:17:54.963 ] 00:17:54.963 }' 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.963 [2024-12-12 09:31:28.828055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.963 [2024-12-12 09:31:28.845258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.963 09:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:54.963 [2024-12-12 09:31:28.848173] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.897 09:31:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.897 "name": "raid_bdev1", 00:17:55.897 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:55.897 "strip_size_kb": 0, 00:17:55.897 "state": "online", 00:17:55.897 "raid_level": "raid1", 00:17:55.897 "superblock": true, 00:17:55.897 "num_base_bdevs": 2, 00:17:55.897 "num_base_bdevs_discovered": 2, 00:17:55.897 "num_base_bdevs_operational": 2, 00:17:55.897 "process": { 00:17:55.897 "type": "rebuild", 00:17:55.897 "target": "spare", 00:17:55.897 "progress": { 00:17:55.897 "blocks": 2560, 00:17:55.897 "percent": 32 00:17:55.897 } 00:17:55.897 }, 00:17:55.897 "base_bdevs_list": [ 00:17:55.897 { 00:17:55.897 "name": "spare", 00:17:55.897 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:55.897 "is_configured": true, 00:17:55.897 "data_offset": 256, 00:17:55.897 "data_size": 7936 00:17:55.897 }, 
00:17:55.897 { 00:17:55.897 "name": "BaseBdev2", 00:17:55.897 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:55.897 "is_configured": true, 00:17:55.897 "data_offset": 256, 00:17:55.897 "data_size": 7936 00:17:55.897 } 00:17:55.897 ] 00:17:55.897 }' 00:17:55.897 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:56.156 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=715 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.156 09:31:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.156 09:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.156 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.156 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.156 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.156 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.156 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.156 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.156 "name": "raid_bdev1", 00:17:56.156 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:56.156 "strip_size_kb": 0, 00:17:56.156 "state": "online", 00:17:56.156 "raid_level": "raid1", 00:17:56.156 "superblock": true, 00:17:56.156 "num_base_bdevs": 2, 00:17:56.156 "num_base_bdevs_discovered": 2, 00:17:56.156 "num_base_bdevs_operational": 2, 00:17:56.156 "process": { 00:17:56.156 "type": "rebuild", 00:17:56.156 "target": "spare", 00:17:56.156 "progress": { 00:17:56.156 "blocks": 2816, 00:17:56.157 "percent": 35 00:17:56.157 } 00:17:56.157 }, 00:17:56.157 "base_bdevs_list": [ 00:17:56.157 { 00:17:56.157 "name": "spare", 00:17:56.157 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:56.157 "is_configured": true, 00:17:56.157 "data_offset": 256, 00:17:56.157 "data_size": 7936 00:17:56.157 }, 00:17:56.157 { 00:17:56.157 "name": "BaseBdev2", 00:17:56.157 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:56.157 
"is_configured": true, 00:17:56.157 "data_offset": 256, 00:17:56.157 "data_size": 7936 00:17:56.157 } 00:17:56.157 ] 00:17:56.157 }' 00:17:56.157 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.157 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.157 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.157 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.157 09:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.531 09:31:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.531 "name": "raid_bdev1", 00:17:57.531 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:57.531 "strip_size_kb": 0, 00:17:57.531 "state": "online", 00:17:57.531 "raid_level": "raid1", 00:17:57.531 "superblock": true, 00:17:57.531 "num_base_bdevs": 2, 00:17:57.531 "num_base_bdevs_discovered": 2, 00:17:57.531 "num_base_bdevs_operational": 2, 00:17:57.531 "process": { 00:17:57.531 "type": "rebuild", 00:17:57.531 "target": "spare", 00:17:57.531 "progress": { 00:17:57.531 "blocks": 5632, 00:17:57.531 "percent": 70 00:17:57.531 } 00:17:57.531 }, 00:17:57.531 "base_bdevs_list": [ 00:17:57.531 { 00:17:57.531 "name": "spare", 00:17:57.531 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:57.531 "is_configured": true, 00:17:57.531 "data_offset": 256, 00:17:57.531 "data_size": 7936 00:17:57.531 }, 00:17:57.531 { 00:17:57.531 "name": "BaseBdev2", 00:17:57.531 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:57.531 "is_configured": true, 00:17:57.531 "data_offset": 256, 00:17:57.531 "data_size": 7936 00:17:57.531 } 00:17:57.531 ] 00:17:57.531 }' 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.531 09:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.097 [2024-12-12 09:31:31.978473] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:58.097 [2024-12-12 09:31:31.978596] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:58.097 [2024-12-12 09:31:31.978765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.355 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.355 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.355 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.356 "name": "raid_bdev1", 00:17:58.356 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:58.356 "strip_size_kb": 0, 00:17:58.356 "state": "online", 00:17:58.356 "raid_level": "raid1", 00:17:58.356 "superblock": true, 00:17:58.356 
"num_base_bdevs": 2, 00:17:58.356 "num_base_bdevs_discovered": 2, 00:17:58.356 "num_base_bdevs_operational": 2, 00:17:58.356 "base_bdevs_list": [ 00:17:58.356 { 00:17:58.356 "name": "spare", 00:17:58.356 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:58.356 "is_configured": true, 00:17:58.356 "data_offset": 256, 00:17:58.356 "data_size": 7936 00:17:58.356 }, 00:17:58.356 { 00:17:58.356 "name": "BaseBdev2", 00:17:58.356 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:58.356 "is_configured": true, 00:17:58.356 "data_offset": 256, 00:17:58.356 "data_size": 7936 00:17:58.356 } 00:17:58.356 ] 00:17:58.356 }' 00:17:58.356 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.614 09:31:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.614 "name": "raid_bdev1", 00:17:58.614 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:58.614 "strip_size_kb": 0, 00:17:58.614 "state": "online", 00:17:58.614 "raid_level": "raid1", 00:17:58.614 "superblock": true, 00:17:58.614 "num_base_bdevs": 2, 00:17:58.614 "num_base_bdevs_discovered": 2, 00:17:58.614 "num_base_bdevs_operational": 2, 00:17:58.614 "base_bdevs_list": [ 00:17:58.614 { 00:17:58.614 "name": "spare", 00:17:58.614 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:58.614 "is_configured": true, 00:17:58.614 "data_offset": 256, 00:17:58.614 "data_size": 7936 00:17:58.614 }, 00:17:58.614 { 00:17:58.614 "name": "BaseBdev2", 00:17:58.614 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:58.614 "is_configured": true, 00:17:58.614 "data_offset": 256, 00:17:58.614 "data_size": 7936 00:17:58.614 } 00:17:58.614 ] 00:17:58.614 }' 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.614 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.615 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.875 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.875 "name": "raid_bdev1", 00:17:58.875 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:17:58.875 
"strip_size_kb": 0, 00:17:58.875 "state": "online", 00:17:58.875 "raid_level": "raid1", 00:17:58.875 "superblock": true, 00:17:58.875 "num_base_bdevs": 2, 00:17:58.875 "num_base_bdevs_discovered": 2, 00:17:58.875 "num_base_bdevs_operational": 2, 00:17:58.875 "base_bdevs_list": [ 00:17:58.875 { 00:17:58.875 "name": "spare", 00:17:58.875 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:17:58.875 "is_configured": true, 00:17:58.875 "data_offset": 256, 00:17:58.875 "data_size": 7936 00:17:58.875 }, 00:17:58.875 { 00:17:58.875 "name": "BaseBdev2", 00:17:58.875 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:17:58.875 "is_configured": true, 00:17:58.875 "data_offset": 256, 00:17:58.875 "data_size": 7936 00:17:58.875 } 00:17:58.875 ] 00:17:58.875 }' 00:17:58.875 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.875 09:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.134 [2024-12-12 09:31:33.038045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.134 [2024-12-12 09:31:33.038170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.134 [2024-12-12 09:31:33.038339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.134 [2024-12-12 09:31:33.038482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.134 [2024-12-12 09:31:33.038563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.134 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:59.397 /dev/nbd0 00:17:59.397 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:59.397 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:59.397 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:59.397 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:59.397 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.397 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.398 1+0 records in 00:17:59.398 1+0 records out 00:17:59.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545388 s, 7.5 MB/s 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.398 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:59.658 /dev/nbd1 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.658 1+0 records in 00:17:59.658 1+0 records out 00:17:59.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527275 s, 7.8 MB/s 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.658 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:59.917 09:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.175 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.434 [2024-12-12 09:31:34.358863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:00.434 [2024-12-12 09:31:34.359045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.434 [2024-12-12 09:31:34.359101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:00.434 [2024-12-12 09:31:34.359149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:00.434 [2024-12-12 09:31:34.361887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.434 spare 00:18:00.434 [2024-12-12 09:31:34.361991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:00.434 [2024-12-12 09:31:34.362121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:00.434 [2024-12-12 09:31:34.362202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.434 [2024-12-12 09:31:34.362423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.434 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.693 [2024-12-12 09:31:34.462354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:00.693 [2024-12-12 09:31:34.462561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.693 [2024-12-12 09:31:34.462822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:00.693 [2024-12-12 09:31:34.463119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:00.693 [2024-12-12 09:31:34.463174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:00.693 [2024-12-12 09:31:34.463434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.693 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.693 "name": "raid_bdev1", 00:18:00.693 "uuid": 
"af3554cc-850d-44f7-b019-b8726ad65715", 00:18:00.693 "strip_size_kb": 0, 00:18:00.693 "state": "online", 00:18:00.693 "raid_level": "raid1", 00:18:00.693 "superblock": true, 00:18:00.693 "num_base_bdevs": 2, 00:18:00.693 "num_base_bdevs_discovered": 2, 00:18:00.693 "num_base_bdevs_operational": 2, 00:18:00.693 "base_bdevs_list": [ 00:18:00.693 { 00:18:00.693 "name": "spare", 00:18:00.694 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 256, 00:18:00.694 "data_size": 7936 00:18:00.694 }, 00:18:00.694 { 00:18:00.694 "name": "BaseBdev2", 00:18:00.694 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 256, 00:18:00.694 "data_size": 7936 00:18:00.694 } 00:18:00.694 ] 00:18:00.694 }' 00:18:00.694 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.694 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:18:00.952 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.953 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.211 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.211 "name": "raid_bdev1", 00:18:01.211 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:01.211 "strip_size_kb": 0, 00:18:01.211 "state": "online", 00:18:01.211 "raid_level": "raid1", 00:18:01.211 "superblock": true, 00:18:01.211 "num_base_bdevs": 2, 00:18:01.211 "num_base_bdevs_discovered": 2, 00:18:01.211 "num_base_bdevs_operational": 2, 00:18:01.211 "base_bdevs_list": [ 00:18:01.211 { 00:18:01.211 "name": "spare", 00:18:01.211 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:18:01.211 "is_configured": true, 00:18:01.211 "data_offset": 256, 00:18:01.211 "data_size": 7936 00:18:01.211 }, 00:18:01.211 { 00:18:01.211 "name": "BaseBdev2", 00:18:01.211 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:01.211 "is_configured": true, 00:18:01.211 "data_offset": 256, 00:18:01.211 "data_size": 7936 00:18:01.211 } 00:18:01.211 ] 00:18:01.211 }' 00:18:01.211 09:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.211 09:31:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.211 [2024-12-12 09:31:35.130447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.211 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.211 "name": "raid_bdev1", 00:18:01.211 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:01.211 "strip_size_kb": 0, 00:18:01.211 "state": "online", 00:18:01.211 "raid_level": "raid1", 00:18:01.211 "superblock": true, 00:18:01.211 "num_base_bdevs": 2, 00:18:01.211 "num_base_bdevs_discovered": 1, 00:18:01.211 "num_base_bdevs_operational": 1, 00:18:01.211 "base_bdevs_list": [ 00:18:01.211 { 00:18:01.211 "name": null, 00:18:01.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.211 "is_configured": false, 00:18:01.211 "data_offset": 0, 00:18:01.211 "data_size": 7936 00:18:01.211 }, 00:18:01.211 { 00:18:01.211 "name": "BaseBdev2", 00:18:01.211 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:01.211 "is_configured": true, 00:18:01.211 "data_offset": 256, 00:18:01.211 "data_size": 7936 00:18:01.211 } 00:18:01.212 ] 00:18:01.212 }' 00:18:01.212 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.212 09:31:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.777 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.777 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.777 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.777 [2024-12-12 09:31:35.557738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.777 [2024-12-12 09:31:35.558169] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.777 [2024-12-12 09:31:35.558199] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:01.777 [2024-12-12 09:31:35.558256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.777 [2024-12-12 09:31:35.574412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:01.777 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.777 09:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:01.777 [2024-12-12 09:31:35.577152] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.741 "name": "raid_bdev1", 00:18:02.741 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:02.741 "strip_size_kb": 0, 00:18:02.741 "state": "online", 00:18:02.741 "raid_level": "raid1", 00:18:02.741 "superblock": true, 00:18:02.741 "num_base_bdevs": 2, 00:18:02.741 "num_base_bdevs_discovered": 2, 00:18:02.741 "num_base_bdevs_operational": 2, 00:18:02.741 "process": { 00:18:02.741 "type": "rebuild", 00:18:02.741 "target": "spare", 00:18:02.741 "progress": { 00:18:02.741 "blocks": 2560, 00:18:02.741 "percent": 32 00:18:02.741 } 00:18:02.741 }, 00:18:02.741 "base_bdevs_list": [ 00:18:02.741 { 00:18:02.741 "name": "spare", 00:18:02.741 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:18:02.741 "is_configured": true, 00:18:02.741 "data_offset": 256, 00:18:02.741 "data_size": 7936 00:18:02.741 }, 00:18:02.741 { 00:18:02.741 "name": "BaseBdev2", 00:18:02.741 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:02.741 "is_configured": true, 00:18:02.741 "data_offset": 256, 00:18:02.741 "data_size": 7936 00:18:02.741 } 00:18:02.741 ] 00:18:02.741 }' 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.741 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.741 [2024-12-12 09:31:36.685765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.741 [2024-12-12 09:31:36.687705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:02.741 [2024-12-12 09:31:36.687782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.741 [2024-12-12 09:31:36.687800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.741 [2024-12-12 09:31:36.687810] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.742 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.047 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.047 "name": "raid_bdev1", 00:18:03.047 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:03.047 "strip_size_kb": 0, 00:18:03.047 "state": "online", 00:18:03.047 "raid_level": "raid1", 00:18:03.047 "superblock": true, 00:18:03.047 "num_base_bdevs": 2, 00:18:03.047 "num_base_bdevs_discovered": 1, 00:18:03.047 "num_base_bdevs_operational": 1, 00:18:03.047 "base_bdevs_list": [ 00:18:03.047 { 00:18:03.047 "name": null, 00:18:03.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.047 
"is_configured": false, 00:18:03.047 "data_offset": 0, 00:18:03.047 "data_size": 7936 00:18:03.047 }, 00:18:03.047 { 00:18:03.047 "name": "BaseBdev2", 00:18:03.047 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:03.047 "is_configured": true, 00:18:03.047 "data_offset": 256, 00:18:03.047 "data_size": 7936 00:18:03.047 } 00:18:03.047 ] 00:18:03.047 }' 00:18:03.047 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.047 09:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.305 09:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.306 09:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.306 09:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.306 [2024-12-12 09:31:37.190819] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.306 [2024-12-12 09:31:37.191005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.306 [2024-12-12 09:31:37.191076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:03.306 [2024-12-12 09:31:37.191126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.306 [2024-12-12 09:31:37.191528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.306 [2024-12-12 09:31:37.191603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.306 [2024-12-12 09:31:37.191728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:03.306 [2024-12-12 09:31:37.191791] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:03.306 [2024-12-12 09:31:37.191840] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:03.306 [2024-12-12 09:31:37.191906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.306 [2024-12-12 09:31:37.207868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:03.306 spare 00:18:03.306 09:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.306 09:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:03.306 [2024-12-12 09:31:37.210595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:04.239 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.239 "name": "raid_bdev1", 00:18:04.239 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:04.240 "strip_size_kb": 0, 00:18:04.240 "state": "online", 00:18:04.240 "raid_level": "raid1", 00:18:04.240 "superblock": true, 00:18:04.240 "num_base_bdevs": 2, 00:18:04.240 "num_base_bdevs_discovered": 2, 00:18:04.240 "num_base_bdevs_operational": 2, 00:18:04.240 "process": { 00:18:04.240 "type": "rebuild", 00:18:04.240 "target": "spare", 00:18:04.240 "progress": { 00:18:04.240 "blocks": 2560, 00:18:04.240 "percent": 32 00:18:04.240 } 00:18:04.240 }, 00:18:04.240 "base_bdevs_list": [ 00:18:04.240 { 00:18:04.240 "name": "spare", 00:18:04.240 "uuid": "02cd90f9-c397-53f0-850e-473232c97008", 00:18:04.240 "is_configured": true, 00:18:04.240 "data_offset": 256, 00:18:04.240 "data_size": 7936 00:18:04.240 }, 00:18:04.240 { 00:18:04.240 "name": "BaseBdev2", 00:18:04.240 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:04.240 "is_configured": true, 00:18:04.240 "data_offset": 256, 00:18:04.240 "data_size": 7936 00:18:04.240 } 00:18:04.240 ] 00:18:04.240 }' 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.498 09:31:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.498 [2024-12-12 09:31:38.358304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.498 [2024-12-12 09:31:38.422124] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.498 [2024-12-12 09:31:38.422237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.498 [2024-12-12 09:31:38.422262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.498 [2024-12-12 09:31:38.422271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.498 09:31:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.498 "name": "raid_bdev1", 00:18:04.498 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:04.498 "strip_size_kb": 0, 00:18:04.498 "state": "online", 00:18:04.498 "raid_level": "raid1", 00:18:04.498 "superblock": true, 00:18:04.498 "num_base_bdevs": 2, 00:18:04.498 "num_base_bdevs_discovered": 1, 00:18:04.498 "num_base_bdevs_operational": 1, 00:18:04.498 "base_bdevs_list": [ 00:18:04.498 { 00:18:04.498 "name": null, 00:18:04.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.498 "is_configured": false, 00:18:04.498 "data_offset": 0, 00:18:04.498 "data_size": 7936 00:18:04.498 }, 00:18:04.498 { 00:18:04.498 "name": "BaseBdev2", 00:18:04.498 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:04.498 "is_configured": true, 00:18:04.498 "data_offset": 256, 00:18:04.498 "data_size": 7936 00:18:04.498 } 00:18:04.498 ] 00:18:04.498 }' 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.498 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.065 "name": "raid_bdev1", 00:18:05.065 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:05.065 "strip_size_kb": 0, 00:18:05.065 "state": "online", 00:18:05.065 "raid_level": "raid1", 00:18:05.065 "superblock": true, 00:18:05.065 "num_base_bdevs": 2, 00:18:05.065 "num_base_bdevs_discovered": 1, 00:18:05.065 "num_base_bdevs_operational": 1, 00:18:05.065 "base_bdevs_list": [ 00:18:05.065 { 00:18:05.065 "name": null, 00:18:05.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.065 "is_configured": false, 00:18:05.065 "data_offset": 0, 00:18:05.065 "data_size": 7936 00:18:05.065 }, 00:18:05.065 { 00:18:05.065 "name": "BaseBdev2", 00:18:05.065 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:05.065 "is_configured": true, 
00:18:05.065 "data_offset": 256, 00:18:05.065 "data_size": 7936 00:18:05.065 } 00:18:05.065 ] 00:18:05.065 }' 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.065 09:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.065 [2024-12-12 09:31:39.036646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:05.065 [2024-12-12 09:31:39.036799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.065 [2024-12-12 09:31:39.036854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:05.065 [2024-12-12 09:31:39.036868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.065 [2024-12-12 09:31:39.037196] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.065 [2024-12-12 09:31:39.037214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.065 [2024-12-12 09:31:39.037297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:05.065 [2024-12-12 09:31:39.037313] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.065 [2024-12-12 09:31:39.037326] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:05.065 [2024-12-12 09:31:39.037341] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:05.065 BaseBdev1 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.065 09:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.441 "name": "raid_bdev1", 00:18:06.441 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:06.441 "strip_size_kb": 0, 00:18:06.441 "state": "online", 00:18:06.441 "raid_level": "raid1", 00:18:06.441 "superblock": true, 00:18:06.441 "num_base_bdevs": 2, 00:18:06.441 "num_base_bdevs_discovered": 1, 00:18:06.441 "num_base_bdevs_operational": 1, 00:18:06.441 "base_bdevs_list": [ 00:18:06.441 { 00:18:06.441 "name": null, 00:18:06.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.441 "is_configured": false, 00:18:06.441 "data_offset": 0, 00:18:06.441 "data_size": 7936 00:18:06.441 }, 00:18:06.441 { 00:18:06.441 "name": "BaseBdev2", 00:18:06.441 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:06.441 "is_configured": true, 00:18:06.441 "data_offset": 256, 00:18:06.441 "data_size": 7936 00:18:06.441 } 00:18:06.441 ] 00:18:06.441 }' 00:18:06.441 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.441 09:31:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.700 "name": "raid_bdev1", 00:18:06.700 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:06.700 "strip_size_kb": 0, 00:18:06.700 "state": "online", 00:18:06.700 "raid_level": "raid1", 00:18:06.700 "superblock": true, 00:18:06.700 "num_base_bdevs": 2, 00:18:06.700 "num_base_bdevs_discovered": 1, 00:18:06.700 "num_base_bdevs_operational": 1, 00:18:06.700 "base_bdevs_list": [ 00:18:06.700 { 00:18:06.700 "name": null, 00:18:06.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.700 "is_configured": false, 00:18:06.700 "data_offset": 0, 00:18:06.700 
"data_size": 7936 00:18:06.700 }, 00:18:06.700 { 00:18:06.700 "name": "BaseBdev2", 00:18:06.700 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:06.700 "is_configured": true, 00:18:06.700 "data_offset": 256, 00:18:06.700 "data_size": 7936 00:18:06.700 } 00:18:06.700 ] 00:18:06.700 }' 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.700 [2024-12-12 09:31:40.686224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.700 [2024-12-12 09:31:40.686553] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:06.700 [2024-12-12 09:31:40.686625] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:06.700 request: 00:18:06.700 { 00:18:06.700 "base_bdev": "BaseBdev1", 00:18:06.700 "raid_bdev": "raid_bdev1", 00:18:06.700 "method": "bdev_raid_add_base_bdev", 00:18:06.700 "req_id": 1 00:18:06.700 } 00:18:06.700 Got JSON-RPC error response 00:18:06.700 response: 00:18:06.700 { 00:18:06.700 "code": -22, 00:18:06.700 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:06.700 } 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.700 09:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.080 "name": "raid_bdev1", 00:18:08.080 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:08.080 "strip_size_kb": 0, 00:18:08.080 "state": "online", 00:18:08.080 "raid_level": "raid1", 00:18:08.080 "superblock": true, 00:18:08.080 "num_base_bdevs": 2, 00:18:08.080 "num_base_bdevs_discovered": 1, 00:18:08.080 "num_base_bdevs_operational": 1, 00:18:08.080 "base_bdevs_list": [ 
00:18:08.080 { 00:18:08.080 "name": null, 00:18:08.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.080 "is_configured": false, 00:18:08.080 "data_offset": 0, 00:18:08.080 "data_size": 7936 00:18:08.080 }, 00:18:08.080 { 00:18:08.080 "name": "BaseBdev2", 00:18:08.080 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:08.080 "is_configured": true, 00:18:08.080 "data_offset": 256, 00:18:08.080 "data_size": 7936 00:18:08.080 } 00:18:08.080 ] 00:18:08.080 }' 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.080 09:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.340 "name": "raid_bdev1", 00:18:08.340 "uuid": "af3554cc-850d-44f7-b019-b8726ad65715", 00:18:08.340 "strip_size_kb": 0, 00:18:08.340 "state": "online", 00:18:08.340 "raid_level": "raid1", 00:18:08.340 "superblock": true, 00:18:08.340 "num_base_bdevs": 2, 00:18:08.340 "num_base_bdevs_discovered": 1, 00:18:08.340 "num_base_bdevs_operational": 1, 00:18:08.340 "base_bdevs_list": [ 00:18:08.340 { 00:18:08.340 "name": null, 00:18:08.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.340 "is_configured": false, 00:18:08.340 "data_offset": 0, 00:18:08.340 "data_size": 7936 00:18:08.340 }, 00:18:08.340 { 00:18:08.340 "name": "BaseBdev2", 00:18:08.340 "uuid": "ca7d0da5-271b-575d-b8fc-67427666b69b", 00:18:08.340 "is_configured": true, 00:18:08.340 "data_offset": 256, 00:18:08.340 "data_size": 7936 00:18:08.340 } 00:18:08.340 ] 00:18:08.340 }' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88949 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88949 ']' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88949 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.340 
09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88949 00:18:08.340 killing process with pid 88949 00:18:08.340 Received shutdown signal, test time was about 60.000000 seconds 00:18:08.340 00:18:08.340 Latency(us) 00:18:08.340 [2024-12-12T09:31:42.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.340 [2024-12-12T09:31:42.363Z] =================================================================================================================== 00:18:08.340 [2024-12-12T09:31:42.363Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88949' 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88949 00:18:08.340 09:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88949 00:18:08.340 [2024-12-12 09:31:42.350117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.340 [2024-12-12 09:31:42.350303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.340 [2024-12-12 09:31:42.350377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.340 [2024-12-12 09:31:42.350391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:08.909 [2024-12-12 09:31:42.750258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.287 09:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:10.287 00:18:10.287 real 0m21.010s 00:18:10.287 user 0m27.280s 00:18:10.287 sys 0m2.938s 00:18:10.287 09:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.287 09:31:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.287 ************************************ 00:18:10.287 END TEST raid_rebuild_test_sb_md_separate 00:18:10.287 ************************************ 00:18:10.287 09:31:44 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:10.287 09:31:44 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:10.287 09:31:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:10.287 09:31:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.287 09:31:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:10.287 ************************************ 00:18:10.287 START TEST raid_state_function_test_sb_md_interleaved 00:18:10.287 ************************************ 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:10.287 09:31:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:10.287 Process raid pid: 89652 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89652 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89652' 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89652 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89652 ']' 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.287 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.288 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.288 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.288 09:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.547 [2024-12-12 09:31:44.323198] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
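A note on the `base_malloc_params='-m 32 -i'` used by this test run: a hypothetical sketch (not part of the captured log, values taken from the bdev dumps in this log) of why malloc bdevs created with interleaved metadata report a 4128-byte block size.

```python
# Sketch: effect of `-m 32 -i` on the exposed block size of a malloc bdev.
data_block_size = 4096  # data block size passed to bdev_malloc_create
md_size = 32            # -m 32: metadata bytes per block
interleaved = True      # -i: metadata interleaved with data, not separate

# With interleaved metadata each exposed block carries its metadata inline,
# so the reported block size is data plus metadata.
block_size = data_block_size + md_size if interleaved else data_block_size
assert block_size == 4128  # matches "block_size": 4128 in this log's bdev dumps
```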
00:18:10.547 [2024-12-12 09:31:44.323488] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.547 [2024-12-12 09:31:44.508998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.807 [2024-12-12 09:31:44.668539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.067 [2024-12-12 09:31:44.937775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.067 [2024-12-12 09:31:44.938007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.325 [2024-12-12 09:31:45.297945] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.325 [2024-12-12 09:31:45.298091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.325 [2024-12-12 09:31:45.298130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.325 [2024-12-12 09:31:45.298159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.325 09:31:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.325 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.325 09:31:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.583 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.583 "name": "Existed_Raid", 00:18:11.583 "uuid": "b15334d3-b862-46f1-852e-79b7386175c2", 00:18:11.583 "strip_size_kb": 0, 00:18:11.583 "state": "configuring", 00:18:11.583 "raid_level": "raid1", 00:18:11.583 "superblock": true, 00:18:11.583 "num_base_bdevs": 2, 00:18:11.583 "num_base_bdevs_discovered": 0, 00:18:11.583 "num_base_bdevs_operational": 2, 00:18:11.583 "base_bdevs_list": [ 00:18:11.583 { 00:18:11.583 "name": "BaseBdev1", 00:18:11.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.583 "is_configured": false, 00:18:11.583 "data_offset": 0, 00:18:11.583 "data_size": 0 00:18:11.583 }, 00:18:11.583 { 00:18:11.583 "name": "BaseBdev2", 00:18:11.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.583 "is_configured": false, 00:18:11.583 "data_offset": 0, 00:18:11.583 "data_size": 0 00:18:11.583 } 00:18:11.583 ] 00:18:11.583 }' 00:18:11.583 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.583 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.844 [2024-12-12 09:31:45.741143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.844 [2024-12-12 09:31:45.741304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.844 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.845 [2024-12-12 09:31:45.749125] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.845 [2024-12-12 09:31:45.749226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.845 [2024-12-12 09:31:45.749266] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.845 [2024-12-12 09:31:45.749299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.845 [2024-12-12 09:31:45.803680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.845 BaseBdev1 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.845 [ 00:18:11.845 { 00:18:11.845 "name": "BaseBdev1", 00:18:11.845 "aliases": [ 00:18:11.845 "8be46d06-1962-4a54-828d-562726531220" 00:18:11.845 ], 00:18:11.845 "product_name": "Malloc disk", 00:18:11.845 "block_size": 4128, 00:18:11.845 "num_blocks": 8192, 00:18:11.845 "uuid": "8be46d06-1962-4a54-828d-562726531220", 00:18:11.845 "md_size": 32, 00:18:11.845 
"md_interleave": true, 00:18:11.845 "dif_type": 0, 00:18:11.845 "assigned_rate_limits": { 00:18:11.845 "rw_ios_per_sec": 0, 00:18:11.845 "rw_mbytes_per_sec": 0, 00:18:11.845 "r_mbytes_per_sec": 0, 00:18:11.845 "w_mbytes_per_sec": 0 00:18:11.845 }, 00:18:11.845 "claimed": true, 00:18:11.845 "claim_type": "exclusive_write", 00:18:11.845 "zoned": false, 00:18:11.845 "supported_io_types": { 00:18:11.845 "read": true, 00:18:11.845 "write": true, 00:18:11.845 "unmap": true, 00:18:11.845 "flush": true, 00:18:11.845 "reset": true, 00:18:11.845 "nvme_admin": false, 00:18:11.845 "nvme_io": false, 00:18:11.845 "nvme_io_md": false, 00:18:11.845 "write_zeroes": true, 00:18:11.845 "zcopy": true, 00:18:11.845 "get_zone_info": false, 00:18:11.845 "zone_management": false, 00:18:11.845 "zone_append": false, 00:18:11.845 "compare": false, 00:18:11.845 "compare_and_write": false, 00:18:11.845 "abort": true, 00:18:11.845 "seek_hole": false, 00:18:11.845 "seek_data": false, 00:18:11.845 "copy": true, 00:18:11.845 "nvme_iov_md": false 00:18:11.845 }, 00:18:11.845 "memory_domains": [ 00:18:11.845 { 00:18:11.845 "dma_device_id": "system", 00:18:11.845 "dma_device_type": 1 00:18:11.845 }, 00:18:11.845 { 00:18:11.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.845 "dma_device_type": 2 00:18:11.845 } 00:18:11.845 ], 00:18:11.845 "driver_specific": {} 00:18:11.845 } 00:18:11.845 ] 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.845 09:31:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.845 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.104 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.104 "name": "Existed_Raid", 00:18:12.104 "uuid": "cdfdc011-8814-4092-9475-30687fe83174", 00:18:12.104 "strip_size_kb": 0, 00:18:12.104 "state": "configuring", 00:18:12.104 "raid_level": "raid1", 
00:18:12.104 "superblock": true, 00:18:12.104 "num_base_bdevs": 2, 00:18:12.104 "num_base_bdevs_discovered": 1, 00:18:12.104 "num_base_bdevs_operational": 2, 00:18:12.104 "base_bdevs_list": [ 00:18:12.104 { 00:18:12.104 "name": "BaseBdev1", 00:18:12.104 "uuid": "8be46d06-1962-4a54-828d-562726531220", 00:18:12.104 "is_configured": true, 00:18:12.104 "data_offset": 256, 00:18:12.104 "data_size": 7936 00:18:12.104 }, 00:18:12.104 { 00:18:12.104 "name": "BaseBdev2", 00:18:12.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.104 "is_configured": false, 00:18:12.104 "data_offset": 0, 00:18:12.104 "data_size": 0 00:18:12.104 } 00:18:12.104 ] 00:18:12.104 }' 00:18:12.104 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.104 09:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.380 [2024-12-12 09:31:46.287033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.380 [2024-12-12 09:31:46.287197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.380 [2024-12-12 09:31:46.295069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.380 [2024-12-12 09:31:46.297507] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.380 [2024-12-12 09:31:46.297606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.380 
09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.380 "name": "Existed_Raid", 00:18:12.380 "uuid": "c002e596-b1fe-4f30-9233-207ec8e41969", 00:18:12.380 "strip_size_kb": 0, 00:18:12.380 "state": "configuring", 00:18:12.380 "raid_level": "raid1", 00:18:12.380 "superblock": true, 00:18:12.380 "num_base_bdevs": 2, 00:18:12.380 "num_base_bdevs_discovered": 1, 00:18:12.380 "num_base_bdevs_operational": 2, 00:18:12.380 "base_bdevs_list": [ 00:18:12.380 { 00:18:12.380 "name": "BaseBdev1", 00:18:12.380 "uuid": "8be46d06-1962-4a54-828d-562726531220", 00:18:12.380 "is_configured": true, 00:18:12.380 "data_offset": 256, 00:18:12.380 "data_size": 7936 00:18:12.380 }, 00:18:12.380 { 00:18:12.380 "name": "BaseBdev2", 00:18:12.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.380 "is_configured": false, 00:18:12.380 "data_offset": 0, 00:18:12.380 "data_size": 0 00:18:12.380 } 00:18:12.380 ] 00:18:12.380 }' 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:12.380 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.962 [2024-12-12 09:31:46.754108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.962 BaseBdev2 00:18:12.962 [2024-12-12 09:31:46.754545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:12.962 [2024-12-12 09:31:46.754568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:12.962 [2024-12-12 09:31:46.754668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:12.962 [2024-12-12 09:31:46.754764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:12.962 [2024-12-12 09:31:46.754777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:12.962 [2024-12-12 09:31:46.754856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.962 [ 00:18:12.962 { 00:18:12.962 "name": "BaseBdev2", 00:18:12.962 "aliases": [ 00:18:12.962 "7b3ba9e4-aee5-4e80-b4ac-c93b2dc9c087" 00:18:12.962 ], 00:18:12.962 "product_name": "Malloc disk", 00:18:12.962 "block_size": 4128, 00:18:12.962 "num_blocks": 8192, 00:18:12.962 "uuid": "7b3ba9e4-aee5-4e80-b4ac-c93b2dc9c087", 00:18:12.962 "md_size": 32, 00:18:12.962 "md_interleave": true, 00:18:12.962 "dif_type": 0, 00:18:12.962 "assigned_rate_limits": { 00:18:12.962 "rw_ios_per_sec": 0, 00:18:12.962 "rw_mbytes_per_sec": 0, 00:18:12.962 "r_mbytes_per_sec": 0, 00:18:12.962 "w_mbytes_per_sec": 0 00:18:12.962 }, 00:18:12.962 "claimed": true, 00:18:12.962 "claim_type": "exclusive_write", 
00:18:12.962 "zoned": false, 00:18:12.962 "supported_io_types": { 00:18:12.962 "read": true, 00:18:12.962 "write": true, 00:18:12.962 "unmap": true, 00:18:12.962 "flush": true, 00:18:12.962 "reset": true, 00:18:12.962 "nvme_admin": false, 00:18:12.962 "nvme_io": false, 00:18:12.962 "nvme_io_md": false, 00:18:12.962 "write_zeroes": true, 00:18:12.962 "zcopy": true, 00:18:12.962 "get_zone_info": false, 00:18:12.962 "zone_management": false, 00:18:12.962 "zone_append": false, 00:18:12.962 "compare": false, 00:18:12.962 "compare_and_write": false, 00:18:12.962 "abort": true, 00:18:12.962 "seek_hole": false, 00:18:12.962 "seek_data": false, 00:18:12.962 "copy": true, 00:18:12.962 "nvme_iov_md": false 00:18:12.962 }, 00:18:12.962 "memory_domains": [ 00:18:12.962 { 00:18:12.962 "dma_device_id": "system", 00:18:12.962 "dma_device_type": 1 00:18:12.962 }, 00:18:12.962 { 00:18:12.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.962 "dma_device_type": 2 00:18:12.962 } 00:18:12.962 ], 00:18:12.962 "driver_specific": {} 00:18:12.962 } 00:18:12.962 ] 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.962 
09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.962 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.962 "name": "Existed_Raid", 00:18:12.962 "uuid": "c002e596-b1fe-4f30-9233-207ec8e41969", 00:18:12.962 "strip_size_kb": 0, 00:18:12.962 "state": "online", 00:18:12.962 "raid_level": "raid1", 00:18:12.962 "superblock": true, 00:18:12.962 "num_base_bdevs": 2, 00:18:12.962 "num_base_bdevs_discovered": 2, 00:18:12.962 
"num_base_bdevs_operational": 2, 00:18:12.962 "base_bdevs_list": [ 00:18:12.962 { 00:18:12.962 "name": "BaseBdev1", 00:18:12.962 "uuid": "8be46d06-1962-4a54-828d-562726531220", 00:18:12.962 "is_configured": true, 00:18:12.962 "data_offset": 256, 00:18:12.962 "data_size": 7936 00:18:12.962 }, 00:18:12.962 { 00:18:12.962 "name": "BaseBdev2", 00:18:12.962 "uuid": "7b3ba9e4-aee5-4e80-b4ac-c93b2dc9c087", 00:18:12.962 "is_configured": true, 00:18:12.962 "data_offset": 256, 00:18:12.962 "data_size": 7936 00:18:12.962 } 00:18:12.962 ] 00:18:12.962 }' 00:18:12.963 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.963 09:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.531 09:31:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.531 [2024-12-12 09:31:47.261660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:13.531 "name": "Existed_Raid", 00:18:13.531 "aliases": [ 00:18:13.531 "c002e596-b1fe-4f30-9233-207ec8e41969" 00:18:13.531 ], 00:18:13.531 "product_name": "Raid Volume", 00:18:13.531 "block_size": 4128, 00:18:13.531 "num_blocks": 7936, 00:18:13.531 "uuid": "c002e596-b1fe-4f30-9233-207ec8e41969", 00:18:13.531 "md_size": 32, 00:18:13.531 "md_interleave": true, 00:18:13.531 "dif_type": 0, 00:18:13.531 "assigned_rate_limits": { 00:18:13.531 "rw_ios_per_sec": 0, 00:18:13.531 "rw_mbytes_per_sec": 0, 00:18:13.531 "r_mbytes_per_sec": 0, 00:18:13.531 "w_mbytes_per_sec": 0 00:18:13.531 }, 00:18:13.531 "claimed": false, 00:18:13.531 "zoned": false, 00:18:13.531 "supported_io_types": { 00:18:13.531 "read": true, 00:18:13.531 "write": true, 00:18:13.531 "unmap": false, 00:18:13.531 "flush": false, 00:18:13.531 "reset": true, 00:18:13.531 "nvme_admin": false, 00:18:13.531 "nvme_io": false, 00:18:13.531 "nvme_io_md": false, 00:18:13.531 "write_zeroes": true, 00:18:13.531 "zcopy": false, 00:18:13.531 "get_zone_info": false, 00:18:13.531 "zone_management": false, 00:18:13.531 "zone_append": false, 00:18:13.531 "compare": false, 00:18:13.531 "compare_and_write": false, 00:18:13.531 "abort": false, 00:18:13.531 "seek_hole": false, 00:18:13.531 "seek_data": false, 00:18:13.531 "copy": false, 00:18:13.531 "nvme_iov_md": false 00:18:13.531 }, 00:18:13.531 "memory_domains": [ 00:18:13.531 { 00:18:13.531 "dma_device_id": "system", 00:18:13.531 "dma_device_type": 1 00:18:13.531 }, 00:18:13.531 { 00:18:13.531 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:13.531 "dma_device_type": 2 00:18:13.531 }, 00:18:13.531 { 00:18:13.531 "dma_device_id": "system", 00:18:13.531 "dma_device_type": 1 00:18:13.531 }, 00:18:13.531 { 00:18:13.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.531 "dma_device_type": 2 00:18:13.531 } 00:18:13.531 ], 00:18:13.531 "driver_specific": { 00:18:13.531 "raid": { 00:18:13.531 "uuid": "c002e596-b1fe-4f30-9233-207ec8e41969", 00:18:13.531 "strip_size_kb": 0, 00:18:13.531 "state": "online", 00:18:13.531 "raid_level": "raid1", 00:18:13.531 "superblock": true, 00:18:13.531 "num_base_bdevs": 2, 00:18:13.531 "num_base_bdevs_discovered": 2, 00:18:13.531 "num_base_bdevs_operational": 2, 00:18:13.531 "base_bdevs_list": [ 00:18:13.531 { 00:18:13.531 "name": "BaseBdev1", 00:18:13.531 "uuid": "8be46d06-1962-4a54-828d-562726531220", 00:18:13.531 "is_configured": true, 00:18:13.531 "data_offset": 256, 00:18:13.531 "data_size": 7936 00:18:13.531 }, 00:18:13.531 { 00:18:13.531 "name": "BaseBdev2", 00:18:13.531 "uuid": "7b3ba9e4-aee5-4e80-b4ac-c93b2dc9c087", 00:18:13.531 "is_configured": true, 00:18:13.531 "data_offset": 256, 00:18:13.531 "data_size": 7936 00:18:13.531 } 00:18:13.531 ] 00:18:13.531 } 00:18:13.531 } 00:18:13.531 }' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:13.531 BaseBdev2' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:13.531 
09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.531 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.531 [2024-12-12 09:31:47.497052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.790 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.791 09:31:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.791 "name": "Existed_Raid", 00:18:13.791 "uuid": "c002e596-b1fe-4f30-9233-207ec8e41969", 00:18:13.791 "strip_size_kb": 0, 00:18:13.791 "state": "online", 00:18:13.791 "raid_level": "raid1", 00:18:13.791 "superblock": true, 00:18:13.791 "num_base_bdevs": 2, 00:18:13.791 "num_base_bdevs_discovered": 1, 00:18:13.791 "num_base_bdevs_operational": 1, 00:18:13.791 "base_bdevs_list": [ 00:18:13.791 { 00:18:13.791 "name": null, 00:18:13.791 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:13.791 "is_configured": false, 00:18:13.791 "data_offset": 0, 00:18:13.791 "data_size": 7936 00:18:13.791 }, 00:18:13.791 { 00:18:13.791 "name": "BaseBdev2", 00:18:13.791 "uuid": "7b3ba9e4-aee5-4e80-b4ac-c93b2dc9c087", 00:18:13.791 "is_configured": true, 00:18:13.791 "data_offset": 256, 00:18:13.791 "data_size": 7936 00:18:13.791 } 00:18:13.791 ] 00:18:13.791 }' 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.791 09:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.050 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:14.050 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:14.309 09:31:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.309 [2024-12-12 09:31:48.120119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:14.309 [2024-12-12 09:31:48.120404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.309 [2024-12-12 09:31:48.232456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.309 [2024-12-12 09:31:48.232665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.309 [2024-12-12 09:31:48.232697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89652 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89652 ']' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89652 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89652 00:18:14.309 killing process with pid 89652 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89652' 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89652 00:18:14.309 09:31:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89652 00:18:14.309 [2024-12-12 09:31:48.326250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.569 [2024-12-12 09:31:48.345468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.947 
09:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:15.947 00:18:15.947 real 0m5.426s 00:18:15.947 user 0m7.626s 00:18:15.947 sys 0m1.042s 00:18:15.947 09:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.947 09:31:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.947 ************************************ 00:18:15.947 END TEST raid_state_function_test_sb_md_interleaved 00:18:15.947 ************************************ 00:18:15.947 09:31:49 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:15.947 09:31:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:15.947 09:31:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.947 09:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.947 ************************************ 00:18:15.947 START TEST raid_superblock_test_md_interleaved 00:18:15.947 ************************************ 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89901 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89901 00:18:15.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89901 ']' 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.947 09:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.947 [2024-12-12 09:31:49.810194] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:15.947 [2024-12-12 09:31:49.810355] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89901 ] 00:18:16.207 [2024-12-12 09:31:49.992347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.207 [2024-12-12 09:31:50.139533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.466 [2024-12-12 09:31:50.395013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.466 [2024-12-12 09:31:50.395107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.725 malloc1 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.725 [2024-12-12 09:31:50.736427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:16.725 [2024-12-12 09:31:50.736598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.725 [2024-12-12 09:31:50.736639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:16.725 [2024-12-12 09:31:50.736653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.725 [2024-12-12 09:31:50.739319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.725 [2024-12-12 09:31:50.739361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:16.725 pt1 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:16.725 09:31:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.725 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 malloc2 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 [2024-12-12 09:31:50.802585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.985 [2024-12-12 09:31:50.802739] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.985 [2024-12-12 09:31:50.802810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:16.985 [2024-12-12 09:31:50.802883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.985 [2024-12-12 09:31:50.805374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.985 [2024-12-12 09:31:50.805453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.985 pt2 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 [2024-12-12 09:31:50.810601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:16.985 [2024-12-12 09:31:50.812910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.985 [2024-12-12 09:31:50.813193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:16.985 [2024-12-12 09:31:50.813244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:16.985 [2024-12-12 09:31:50.813368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:16.985 [2024-12-12 09:31:50.813496] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:16.985 [2024-12-12 09:31:50.813542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:16.985 [2024-12-12 09:31:50.813677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.985 "name": "raid_bdev1", 00:18:16.985 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:16.985 "strip_size_kb": 0, 00:18:16.985 "state": "online", 00:18:16.985 "raid_level": "raid1", 00:18:16.985 "superblock": true, 00:18:16.985 "num_base_bdevs": 2, 00:18:16.985 "num_base_bdevs_discovered": 2, 00:18:16.985 "num_base_bdevs_operational": 2, 00:18:16.985 "base_bdevs_list": [ 00:18:16.985 { 00:18:16.985 "name": "pt1", 00:18:16.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:16.985 "is_configured": true, 00:18:16.985 "data_offset": 256, 00:18:16.985 "data_size": 7936 00:18:16.985 }, 00:18:16.985 { 00:18:16.985 "name": "pt2", 00:18:16.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.985 "is_configured": true, 00:18:16.985 "data_offset": 256, 00:18:16.985 "data_size": 7936 00:18:16.985 } 00:18:16.985 ] 00:18:16.985 }' 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.985 09:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.245 09:31:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.245 [2024-12-12 09:31:51.238279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.245 "name": "raid_bdev1", 00:18:17.245 "aliases": [ 00:18:17.245 "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7" 00:18:17.245 ], 00:18:17.245 "product_name": "Raid Volume", 00:18:17.245 "block_size": 4128, 00:18:17.245 "num_blocks": 7936, 00:18:17.245 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:17.245 "md_size": 32, 00:18:17.245 "md_interleave": true, 00:18:17.245 "dif_type": 0, 00:18:17.245 "assigned_rate_limits": { 00:18:17.245 "rw_ios_per_sec": 0, 00:18:17.245 "rw_mbytes_per_sec": 0, 00:18:17.245 "r_mbytes_per_sec": 0, 00:18:17.245 "w_mbytes_per_sec": 0 00:18:17.245 }, 00:18:17.245 "claimed": false, 00:18:17.245 "zoned": false, 00:18:17.245 "supported_io_types": { 00:18:17.245 "read": true, 00:18:17.245 "write": true, 00:18:17.245 "unmap": false, 00:18:17.245 "flush": false, 00:18:17.245 "reset": true, 
00:18:17.245 "nvme_admin": false, 00:18:17.245 "nvme_io": false, 00:18:17.245 "nvme_io_md": false, 00:18:17.245 "write_zeroes": true, 00:18:17.245 "zcopy": false, 00:18:17.245 "get_zone_info": false, 00:18:17.245 "zone_management": false, 00:18:17.245 "zone_append": false, 00:18:17.245 "compare": false, 00:18:17.245 "compare_and_write": false, 00:18:17.245 "abort": false, 00:18:17.245 "seek_hole": false, 00:18:17.245 "seek_data": false, 00:18:17.245 "copy": false, 00:18:17.245 "nvme_iov_md": false 00:18:17.245 }, 00:18:17.245 "memory_domains": [ 00:18:17.245 { 00:18:17.245 "dma_device_id": "system", 00:18:17.245 "dma_device_type": 1 00:18:17.245 }, 00:18:17.245 { 00:18:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.245 "dma_device_type": 2 00:18:17.245 }, 00:18:17.245 { 00:18:17.245 "dma_device_id": "system", 00:18:17.245 "dma_device_type": 1 00:18:17.245 }, 00:18:17.245 { 00:18:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.245 "dma_device_type": 2 00:18:17.245 } 00:18:17.245 ], 00:18:17.245 "driver_specific": { 00:18:17.245 "raid": { 00:18:17.245 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:17.245 "strip_size_kb": 0, 00:18:17.245 "state": "online", 00:18:17.245 "raid_level": "raid1", 00:18:17.245 "superblock": true, 00:18:17.245 "num_base_bdevs": 2, 00:18:17.245 "num_base_bdevs_discovered": 2, 00:18:17.245 "num_base_bdevs_operational": 2, 00:18:17.245 "base_bdevs_list": [ 00:18:17.245 { 00:18:17.245 "name": "pt1", 00:18:17.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.245 "is_configured": true, 00:18:17.245 "data_offset": 256, 00:18:17.245 "data_size": 7936 00:18:17.245 }, 00:18:17.245 { 00:18:17.245 "name": "pt2", 00:18:17.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.245 "is_configured": true, 00:18:17.245 "data_offset": 256, 00:18:17.245 "data_size": 7936 00:18:17.245 } 00:18:17.245 ] 00:18:17.245 } 00:18:17.245 } 00:18:17.245 }' 00:18:17.245 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:17.505 pt2' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.505 
09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.505 [2024-12-12 09:31:51.449837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ecc58d84-a7e4-4c91-8980-b44f0e13b8a7 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z ecc58d84-a7e4-4c91-8980-b44f0e13b8a7 ']' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.505 09:31:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.505 [2024-12-12 09:31:51.493420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.505 [2024-12-12 09:31:51.493507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.505 [2024-12-12 09:31:51.493683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.505 [2024-12-12 09:31:51.493794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.505 [2024-12-12 09:31:51.493851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.505 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:17.766 09:31:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:17.766 
09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 [2024-12-12 09:31:51.609266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:17.766 [2024-12-12 09:31:51.611725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:17.766 [2024-12-12 09:31:51.611839] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:17.766 [2024-12-12 09:31:51.611911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:17.766 [2024-12-12 09:31:51.611928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.766 [2024-12-12 09:31:51.611940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:17.766 request: 
00:18:17.766 { 00:18:17.766 "name": "raid_bdev1", 00:18:17.766 "raid_level": "raid1", 00:18:17.766 "base_bdevs": [ 00:18:17.766 "malloc1", 00:18:17.766 "malloc2" 00:18:17.766 ], 00:18:17.766 "superblock": false, 00:18:17.766 "method": "bdev_raid_create", 00:18:17.766 "req_id": 1 00:18:17.766 } 00:18:17.766 Got JSON-RPC error response 00:18:17.766 response: 00:18:17.766 { 00:18:17.766 "code": -17, 00:18:17.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:17.766 } 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 [2024-12-12 09:31:51.669134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.766 [2024-12-12 09:31:51.669288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.766 [2024-12-12 09:31:51.669333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:17.766 [2024-12-12 09:31:51.669367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.766 [2024-12-12 09:31:51.671947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.766 [2024-12-12 09:31:51.672002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.766 [2024-12-12 09:31:51.672079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:17.766 [2024-12-12 09:31:51.672152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.766 pt1 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.766 09:31:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.766 "name": "raid_bdev1", 00:18:17.766 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:17.766 "strip_size_kb": 0, 00:18:17.766 "state": "configuring", 00:18:17.766 "raid_level": "raid1", 00:18:17.766 "superblock": true, 00:18:17.766 "num_base_bdevs": 2, 00:18:17.766 "num_base_bdevs_discovered": 1, 00:18:17.766 "num_base_bdevs_operational": 2, 00:18:17.766 "base_bdevs_list": [ 00:18:17.766 { 00:18:17.766 "name": "pt1", 00:18:17.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.766 "is_configured": true, 00:18:17.766 
"data_offset": 256, 00:18:17.766 "data_size": 7936 00:18:17.766 }, 00:18:17.766 { 00:18:17.766 "name": null, 00:18:17.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.766 "is_configured": false, 00:18:17.766 "data_offset": 256, 00:18:17.766 "data_size": 7936 00:18:17.766 } 00:18:17.766 ] 00:18:17.766 }' 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.766 09:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.335 [2024-12-12 09:31:52.100381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:18.335 [2024-12-12 09:31:52.100532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.335 [2024-12-12 09:31:52.100578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:18.335 [2024-12-12 09:31:52.100612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.335 [2024-12-12 09:31:52.100888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.335 [2024-12-12 09:31:52.100947] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:18.335 [2024-12-12 09:31:52.101058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:18.335 [2024-12-12 09:31:52.101131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.335 [2024-12-12 09:31:52.101267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:18.335 [2024-12-12 09:31:52.101308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:18.335 [2024-12-12 09:31:52.101412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:18.335 [2024-12-12 09:31:52.101522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:18.335 [2024-12-12 09:31:52.101557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:18.335 [2024-12-12 09:31:52.101671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.335 pt2 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.335 09:31:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.335 "name": "raid_bdev1", 00:18:18.335 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:18.335 "strip_size_kb": 0, 00:18:18.335 "state": "online", 00:18:18.335 "raid_level": "raid1", 00:18:18.335 "superblock": true, 00:18:18.335 "num_base_bdevs": 2, 00:18:18.335 "num_base_bdevs_discovered": 2, 00:18:18.335 "num_base_bdevs_operational": 2, 00:18:18.335 "base_bdevs_list": [ 00:18:18.335 { 00:18:18.335 "name": "pt1", 00:18:18.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.335 "is_configured": true, 00:18:18.335 
"data_offset": 256, 00:18:18.335 "data_size": 7936 00:18:18.335 }, 00:18:18.335 { 00:18:18.335 "name": "pt2", 00:18:18.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.335 "is_configured": true, 00:18:18.335 "data_offset": 256, 00:18:18.335 "data_size": 7936 00:18:18.335 } 00:18:18.335 ] 00:18:18.335 }' 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.335 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:18.595 [2024-12-12 09:31:52.564009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:18.595 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.595 "name": "raid_bdev1", 00:18:18.595 "aliases": [ 00:18:18.595 "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7" 00:18:18.595 ], 00:18:18.595 "product_name": "Raid Volume", 00:18:18.595 "block_size": 4128, 00:18:18.595 "num_blocks": 7936, 00:18:18.595 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:18.595 "md_size": 32, 00:18:18.595 "md_interleave": true, 00:18:18.595 "dif_type": 0, 00:18:18.595 "assigned_rate_limits": { 00:18:18.595 "rw_ios_per_sec": 0, 00:18:18.595 "rw_mbytes_per_sec": 0, 00:18:18.595 "r_mbytes_per_sec": 0, 00:18:18.595 "w_mbytes_per_sec": 0 00:18:18.595 }, 00:18:18.595 "claimed": false, 00:18:18.595 "zoned": false, 00:18:18.595 "supported_io_types": { 00:18:18.595 "read": true, 00:18:18.595 "write": true, 00:18:18.595 "unmap": false, 00:18:18.595 "flush": false, 00:18:18.595 "reset": true, 00:18:18.595 "nvme_admin": false, 00:18:18.595 "nvme_io": false, 00:18:18.595 "nvme_io_md": false, 00:18:18.595 "write_zeroes": true, 00:18:18.595 "zcopy": false, 00:18:18.595 "get_zone_info": false, 00:18:18.595 "zone_management": false, 00:18:18.595 "zone_append": false, 00:18:18.595 "compare": false, 00:18:18.595 "compare_and_write": false, 00:18:18.595 "abort": false, 00:18:18.595 "seek_hole": false, 00:18:18.595 "seek_data": false, 00:18:18.595 "copy": false, 00:18:18.595 "nvme_iov_md": false 00:18:18.595 }, 00:18:18.595 "memory_domains": [ 00:18:18.595 { 00:18:18.595 "dma_device_id": "system", 00:18:18.595 "dma_device_type": 1 00:18:18.595 }, 00:18:18.595 { 00:18:18.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.595 "dma_device_type": 2 00:18:18.595 }, 00:18:18.595 { 00:18:18.595 "dma_device_id": "system", 00:18:18.595 "dma_device_type": 1 00:18:18.595 }, 00:18:18.595 { 00:18:18.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.595 "dma_device_type": 2 00:18:18.595 } 00:18:18.595 ], 00:18:18.595 "driver_specific": { 
00:18:18.595 "raid": { 00:18:18.595 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:18.595 "strip_size_kb": 0, 00:18:18.595 "state": "online", 00:18:18.595 "raid_level": "raid1", 00:18:18.595 "superblock": true, 00:18:18.595 "num_base_bdevs": 2, 00:18:18.595 "num_base_bdevs_discovered": 2, 00:18:18.595 "num_base_bdevs_operational": 2, 00:18:18.595 "base_bdevs_list": [ 00:18:18.595 { 00:18:18.595 "name": "pt1", 00:18:18.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.595 "is_configured": true, 00:18:18.595 "data_offset": 256, 00:18:18.595 "data_size": 7936 00:18:18.595 }, 00:18:18.595 { 00:18:18.595 "name": "pt2", 00:18:18.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.595 "is_configured": true, 00:18:18.595 "data_offset": 256, 00:18:18.595 "data_size": 7936 00:18:18.595 } 00:18:18.595 ] 00:18:18.595 } 00:18:18.596 } 00:18:18.596 }' 00:18:18.596 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:18.856 pt2' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.856 [2024-12-12 09:31:52.819565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' ecc58d84-a7e4-4c91-8980-b44f0e13b8a7 '!=' ecc58d84-a7e4-4c91-8980-b44f0e13b8a7 ']' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.856 [2024-12-12 09:31:52.863240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.856 
09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.856 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.116 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.116 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.116 "name": "raid_bdev1", 00:18:19.116 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:19.116 "strip_size_kb": 0, 00:18:19.116 "state": "online", 00:18:19.116 "raid_level": "raid1", 00:18:19.116 "superblock": true, 00:18:19.116 "num_base_bdevs": 2, 00:18:19.116 "num_base_bdevs_discovered": 1, 00:18:19.116 "num_base_bdevs_operational": 1, 00:18:19.116 "base_bdevs_list": [ 00:18:19.116 { 00:18:19.116 "name": null, 00:18:19.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.116 "is_configured": false, 00:18:19.116 
"data_offset": 0, 00:18:19.116 "data_size": 7936 00:18:19.116 }, 00:18:19.116 { 00:18:19.116 "name": "pt2", 00:18:19.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.116 "is_configured": true, 00:18:19.116 "data_offset": 256, 00:18:19.116 "data_size": 7936 00:18:19.116 } 00:18:19.116 ] 00:18:19.116 }' 00:18:19.116 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.116 09:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 [2024-12-12 09:31:53.330340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.378 [2024-12-12 09:31:53.330440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.378 [2024-12-12 09:31:53.330586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.378 [2024-12-12 09:31:53.330683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.378 [2024-12-12 09:31:53.330736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.378 09:31:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.378 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 [2024-12-12 09:31:53.398239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.378 [2024-12-12 09:31:53.398367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.378 [2024-12-12 09:31:53.398394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:19.378 [2024-12-12 09:31:53.398407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.638 [2024-12-12 09:31:53.401005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.638 [2024-12-12 09:31:53.401048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.638 [2024-12-12 09:31:53.401117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:19.638 [2024-12-12 09:31:53.401180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.638 [2024-12-12 09:31:53.401269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:19.638 [2024-12-12 09:31:53.401282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:19.638 [2024-12-12 09:31:53.401395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:19.638 [2024-12-12 09:31:53.401486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:19.638 [2024-12-12 09:31:53.401494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:19.638 [2024-12-12 09:31:53.401575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:19.638 pt2 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.639 "name": "raid_bdev1", 00:18:19.639 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:19.639 "strip_size_kb": 0, 00:18:19.639 "state": "online", 00:18:19.639 "raid_level": "raid1", 00:18:19.639 "superblock": true, 00:18:19.639 "num_base_bdevs": 2, 00:18:19.639 "num_base_bdevs_discovered": 1, 00:18:19.639 "num_base_bdevs_operational": 1, 00:18:19.639 "base_bdevs_list": [ 00:18:19.639 { 00:18:19.639 "name": null, 00:18:19.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.639 "is_configured": false, 00:18:19.639 "data_offset": 256, 00:18:19.639 "data_size": 7936 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "name": "pt2", 00:18:19.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.639 "is_configured": true, 00:18:19.639 "data_offset": 256, 00:18:19.639 "data_size": 7936 00:18:19.639 } 00:18:19.639 ] 00:18:19.639 }' 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.639 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.899 [2024-12-12 09:31:53.853436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.899 [2024-12-12 09:31:53.853543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.899 [2024-12-12 09:31:53.853676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.899 
[2024-12-12 09:31:53.853748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.899 [2024-12-12 09:31:53.853760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.899 [2024-12-12 09:31:53.913405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.899 [2024-12-12 09:31:53.913551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:19.899 [2024-12-12 09:31:53.913630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:19.899 [2024-12-12 09:31:53.913670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.899 [2024-12-12 09:31:53.916354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.899 [2024-12-12 09:31:53.916446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.899 [2024-12-12 09:31:53.916555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:19.899 [2024-12-12 09:31:53.916663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.899 [2024-12-12 09:31:53.916849] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:19.899 [2024-12-12 09:31:53.916909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.899 [2024-12-12 09:31:53.916953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:19.899 [2024-12-12 09:31:53.917087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.899 [2024-12-12 09:31:53.917223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:19.899 [2024-12-12 09:31:53.917265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:19.899 [2024-12-12 09:31:53.917390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:19.899 [2024-12-12 09:31:53.917500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:19.899 pt1 00:18:19.899 [2024-12-12 09:31:53.917542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 
00:18:19.899 [2024-12-12 09:31:53.917682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.899 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.159 09:31:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.159 "name": "raid_bdev1", 00:18:20.159 "uuid": "ecc58d84-a7e4-4c91-8980-b44f0e13b8a7", 00:18:20.159 "strip_size_kb": 0, 00:18:20.159 "state": "online", 00:18:20.159 "raid_level": "raid1", 00:18:20.159 "superblock": true, 00:18:20.159 "num_base_bdevs": 2, 00:18:20.159 "num_base_bdevs_discovered": 1, 00:18:20.159 "num_base_bdevs_operational": 1, 00:18:20.159 "base_bdevs_list": [ 00:18:20.159 { 00:18:20.159 "name": null, 00:18:20.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.159 "is_configured": false, 00:18:20.159 "data_offset": 256, 00:18:20.159 "data_size": 7936 00:18:20.159 }, 00:18:20.159 { 00:18:20.159 "name": "pt2", 00:18:20.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.159 "is_configured": true, 00:18:20.159 "data_offset": 256, 00:18:20.159 "data_size": 7936 00:18:20.159 } 00:18:20.159 ] 00:18:20.159 }' 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.159 09:31:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.419 09:31:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.419 [2024-12-12 09:31:54.336979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' ecc58d84-a7e4-4c91-8980-b44f0e13b8a7 '!=' ecc58d84-a7e4-4c91-8980-b44f0e13b8a7 ']' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89901 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89901 ']' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89901 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89901 00:18:20.419 killing process with pid 89901 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89901' 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89901 00:18:20.419 09:31:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89901 00:18:20.419 [2024-12-12 09:31:54.399184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.419 [2024-12-12 09:31:54.399309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.419 [2024-12-12 09:31:54.399374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.419 [2024-12-12 09:31:54.399397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:20.678 [2024-12-12 09:31:54.640264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.057 ************************************ 00:18:22.057 END TEST raid_superblock_test_md_interleaved 00:18:22.057 ************************************ 00:18:22.057 09:31:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:22.057 00:18:22.057 real 0m6.210s 00:18:22.057 user 0m9.139s 00:18:22.057 sys 0m1.227s 00:18:22.057 09:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.057 09:31:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.057 09:31:55 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:22.057 09:31:55 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:22.057 09:31:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.057 09:31:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.057 ************************************ 00:18:22.057 START TEST raid_rebuild_test_sb_md_interleaved 00:18:22.057 ************************************ 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:22.057 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=90230 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 90230 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90230 ']' 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.058 09:31:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.058 09:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:22.318 [2024-12-12 09:31:56.089559] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:22.318 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:22.318 Zero copy mechanism will not be used. 
00:18:22.318 [2024-12-12 09:31:56.089821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90230 ] 00:18:22.318 [2024-12-12 09:31:56.271569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.577 [2024-12-12 09:31:56.418836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.836 [2024-12-12 09:31:56.668755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.836 [2024-12-12 09:31:56.668853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.096 09:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.096 09:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:23.096 09:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.096 09:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:23.096 09:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.096 09:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.096 BaseBdev1_malloc 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.096 09:31:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.096 [2024-12-12 09:31:57.033419] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:23.096 [2024-12-12 09:31:57.033590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.096 [2024-12-12 09:31:57.033649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:23.096 [2024-12-12 09:31:57.033701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.096 [2024-12-12 09:31:57.036411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.096 [2024-12-12 09:31:57.036520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:23.096 BaseBdev1 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.096 BaseBdev2_malloc 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.096 [2024-12-12 09:31:57.098208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:23.096 [2024-12-12 09:31:57.098385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.096 [2024-12-12 09:31:57.098435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:23.096 [2024-12-12 09:31:57.098478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.096 [2024-12-12 09:31:57.100849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.096 [2024-12-12 09:31:57.100940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:23.096 BaseBdev2 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.096 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.356 spare_malloc 00:18:23.356 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.357 spare_delay 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.357 [2024-12-12 09:31:57.183548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.357 [2024-12-12 09:31:57.183724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.357 [2024-12-12 09:31:57.183786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:23.357 [2024-12-12 09:31:57.183835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.357 [2024-12-12 09:31:57.186323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.357 [2024-12-12 09:31:57.186404] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.357 spare 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.357 [2024-12-12 09:31:57.195584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.357 [2024-12-12 09:31:57.197876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.357 [2024-12-12 
09:31:57.198166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:23.357 [2024-12-12 09:31:57.198222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:23.357 [2024-12-12 09:31:57.198352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:23.357 [2024-12-12 09:31:57.198476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:23.357 [2024-12-12 09:31:57.198514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:23.357 [2024-12-12 09:31:57.198660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.357 "name": "raid_bdev1", 00:18:23.357 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:23.357 "strip_size_kb": 0, 00:18:23.357 "state": "online", 00:18:23.357 "raid_level": "raid1", 00:18:23.357 "superblock": true, 00:18:23.357 "num_base_bdevs": 2, 00:18:23.357 "num_base_bdevs_discovered": 2, 00:18:23.357 "num_base_bdevs_operational": 2, 00:18:23.357 "base_bdevs_list": [ 00:18:23.357 { 00:18:23.357 "name": "BaseBdev1", 00:18:23.357 "uuid": "f3aa7745-237a-5feb-8000-84e860cc4598", 00:18:23.357 "is_configured": true, 00:18:23.357 "data_offset": 256, 00:18:23.357 "data_size": 7936 00:18:23.357 }, 00:18:23.357 { 00:18:23.357 "name": "BaseBdev2", 00:18:23.357 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:23.357 "is_configured": true, 00:18:23.357 "data_offset": 256, 00:18:23.357 "data_size": 7936 00:18:23.357 } 00:18:23.357 ] 00:18:23.357 }' 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.357 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.927 09:31:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.927 [2024-12-12 09:31:57.671180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:23.927 09:31:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.927 [2024-12-12 09:31:57.766671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.927 09:31:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.927 "name": "raid_bdev1", 00:18:23.927 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:23.927 "strip_size_kb": 0, 00:18:23.927 "state": "online", 00:18:23.927 "raid_level": "raid1", 00:18:23.927 "superblock": true, 00:18:23.927 "num_base_bdevs": 2, 00:18:23.927 "num_base_bdevs_discovered": 1, 00:18:23.927 "num_base_bdevs_operational": 1, 00:18:23.927 "base_bdevs_list": [ 00:18:23.927 { 00:18:23.927 "name": null, 00:18:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.927 "is_configured": false, 00:18:23.927 "data_offset": 0, 00:18:23.927 "data_size": 7936 00:18:23.927 }, 00:18:23.927 { 00:18:23.927 "name": "BaseBdev2", 00:18:23.927 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:23.927 "is_configured": true, 00:18:23.927 "data_offset": 256, 00:18:23.927 "data_size": 7936 00:18:23.927 } 00:18:23.927 ] 00:18:23.927 }' 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.927 09:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.496 09:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.496 09:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.496 09:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.496 [2024-12-12 09:31:58.269820] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.496 [2024-12-12 09:31:58.291118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:24.496 09:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.496 09:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:24.496 [2024-12-12 09:31:58.293793] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.435 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.435 "name": "raid_bdev1", 00:18:25.435 
"uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:25.435 "strip_size_kb": 0, 00:18:25.435 "state": "online", 00:18:25.435 "raid_level": "raid1", 00:18:25.435 "superblock": true, 00:18:25.435 "num_base_bdevs": 2, 00:18:25.435 "num_base_bdevs_discovered": 2, 00:18:25.435 "num_base_bdevs_operational": 2, 00:18:25.435 "process": { 00:18:25.435 "type": "rebuild", 00:18:25.435 "target": "spare", 00:18:25.435 "progress": { 00:18:25.435 "blocks": 2560, 00:18:25.435 "percent": 32 00:18:25.435 } 00:18:25.435 }, 00:18:25.435 "base_bdevs_list": [ 00:18:25.435 { 00:18:25.435 "name": "spare", 00:18:25.435 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:25.435 "is_configured": true, 00:18:25.435 "data_offset": 256, 00:18:25.435 "data_size": 7936 00:18:25.435 }, 00:18:25.435 { 00:18:25.435 "name": "BaseBdev2", 00:18:25.436 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:25.436 "is_configured": true, 00:18:25.436 "data_offset": 256, 00:18:25.436 "data_size": 7936 00:18:25.436 } 00:18:25.436 ] 00:18:25.436 }' 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.436 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.436 [2024-12-12 09:31:59.445109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:25.695 [2024-12-12 09:31:59.504980] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:25.695 [2024-12-12 09:31:59.505243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.695 [2024-12-12 09:31:59.505285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.695 [2024-12-12 09:31:59.505315] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.695 "name": "raid_bdev1", 00:18:25.695 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:25.695 "strip_size_kb": 0, 00:18:25.695 "state": "online", 00:18:25.695 "raid_level": "raid1", 00:18:25.695 "superblock": true, 00:18:25.695 "num_base_bdevs": 2, 00:18:25.695 "num_base_bdevs_discovered": 1, 00:18:25.695 "num_base_bdevs_operational": 1, 00:18:25.695 "base_bdevs_list": [ 00:18:25.695 { 00:18:25.695 "name": null, 00:18:25.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.695 "is_configured": false, 00:18:25.695 "data_offset": 0, 00:18:25.695 "data_size": 7936 00:18:25.695 }, 00:18:25.695 { 00:18:25.695 "name": "BaseBdev2", 00:18:25.695 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:25.695 "is_configured": true, 00:18:25.695 "data_offset": 256, 00:18:25.695 "data_size": 7936 00:18:25.695 } 00:18:25.695 ] 00:18:25.695 }' 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.695 09:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.264 "name": "raid_bdev1", 00:18:26.264 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:26.264 "strip_size_kb": 0, 00:18:26.264 "state": "online", 00:18:26.264 "raid_level": "raid1", 00:18:26.264 "superblock": true, 00:18:26.264 "num_base_bdevs": 2, 00:18:26.264 "num_base_bdevs_discovered": 1, 00:18:26.264 "num_base_bdevs_operational": 1, 00:18:26.264 "base_bdevs_list": [ 00:18:26.264 { 00:18:26.264 "name": null, 00:18:26.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.264 "is_configured": false, 00:18:26.264 "data_offset": 0, 00:18:26.264 "data_size": 7936 00:18:26.264 }, 00:18:26.264 { 00:18:26.264 "name": "BaseBdev2", 00:18:26.264 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:26.264 "is_configured": true, 00:18:26.264 "data_offset": 256, 00:18:26.264 "data_size": 7936 00:18:26.264 } 00:18:26.264 ] 00:18:26.264 }' 
00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.264 [2024-12-12 09:32:00.147675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.264 [2024-12-12 09:32:00.168727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.264 09:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:26.264 [2024-12-12 09:32:00.171271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.205 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.464 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.464 "name": "raid_bdev1", 00:18:27.464 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:27.464 "strip_size_kb": 0, 00:18:27.464 "state": "online", 00:18:27.464 "raid_level": "raid1", 00:18:27.464 "superblock": true, 00:18:27.464 "num_base_bdevs": 2, 00:18:27.464 "num_base_bdevs_discovered": 2, 00:18:27.464 "num_base_bdevs_operational": 2, 00:18:27.464 "process": { 00:18:27.464 "type": "rebuild", 00:18:27.464 "target": "spare", 00:18:27.464 "progress": { 00:18:27.464 "blocks": 2560, 00:18:27.464 "percent": 32 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 "base_bdevs_list": [ 00:18:27.465 { 00:18:27.465 "name": "spare", 00:18:27.465 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:27.465 "is_configured": true, 00:18:27.465 "data_offset": 256, 00:18:27.465 "data_size": 7936 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "name": "BaseBdev2", 00:18:27.465 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:27.465 "is_configured": true, 00:18:27.465 "data_offset": 256, 00:18:27.465 "data_size": 7936 00:18:27.465 } 00:18:27.465 ] 00:18:27.465 }' 00:18:27.465 09:32:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:27.465 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=747 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.465 09:32:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.465 "name": "raid_bdev1", 00:18:27.465 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:27.465 "strip_size_kb": 0, 00:18:27.465 "state": "online", 00:18:27.465 "raid_level": "raid1", 00:18:27.465 "superblock": true, 00:18:27.465 "num_base_bdevs": 2, 00:18:27.465 "num_base_bdevs_discovered": 2, 00:18:27.465 "num_base_bdevs_operational": 2, 00:18:27.465 "process": { 00:18:27.465 "type": "rebuild", 00:18:27.465 "target": "spare", 00:18:27.465 "progress": { 00:18:27.465 "blocks": 2816, 00:18:27.465 "percent": 35 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 "base_bdevs_list": [ 00:18:27.465 { 00:18:27.465 "name": "spare", 00:18:27.465 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:27.465 "is_configured": true, 00:18:27.465 "data_offset": 256, 00:18:27.465 "data_size": 7936 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "name": "BaseBdev2", 00:18:27.465 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:27.465 "is_configured": true, 00:18:27.465 "data_offset": 256, 00:18:27.465 "data_size": 7936 00:18:27.465 } 00:18:27.465 ] 00:18:27.465 }' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.465 09:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.844 09:32:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.844 "name": "raid_bdev1", 00:18:28.844 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:28.844 "strip_size_kb": 0, 00:18:28.844 "state": "online", 00:18:28.844 "raid_level": "raid1", 00:18:28.844 "superblock": true, 00:18:28.844 "num_base_bdevs": 2, 00:18:28.844 "num_base_bdevs_discovered": 2, 00:18:28.844 "num_base_bdevs_operational": 2, 00:18:28.844 "process": { 00:18:28.844 "type": "rebuild", 00:18:28.844 "target": "spare", 00:18:28.844 "progress": { 00:18:28.844 "blocks": 5632, 00:18:28.844 "percent": 70 00:18:28.844 } 00:18:28.844 }, 00:18:28.844 "base_bdevs_list": [ 00:18:28.844 { 00:18:28.844 "name": "spare", 00:18:28.844 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:28.844 "is_configured": true, 00:18:28.844 "data_offset": 256, 00:18:28.844 "data_size": 7936 00:18:28.844 }, 00:18:28.844 { 00:18:28.844 "name": "BaseBdev2", 00:18:28.844 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:28.844 "is_configured": true, 00:18:28.844 "data_offset": 256, 00:18:28.844 "data_size": 7936 00:18:28.844 } 00:18:28.844 ] 00:18:28.844 }' 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.844 09:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.413 [2024-12-12 09:32:03.299468] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:29.413 [2024-12-12 09:32:03.299702] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:29.413 [2024-12-12 09:32:03.299943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.674 "name": "raid_bdev1", 00:18:29.674 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:29.674 "strip_size_kb": 0, 00:18:29.674 "state": "online", 00:18:29.674 "raid_level": "raid1", 00:18:29.674 "superblock": true, 00:18:29.674 "num_base_bdevs": 2, 00:18:29.674 
"num_base_bdevs_discovered": 2, 00:18:29.674 "num_base_bdevs_operational": 2, 00:18:29.674 "base_bdevs_list": [ 00:18:29.674 { 00:18:29.674 "name": "spare", 00:18:29.674 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:29.674 "is_configured": true, 00:18:29.674 "data_offset": 256, 00:18:29.674 "data_size": 7936 00:18:29.674 }, 00:18:29.674 { 00:18:29.674 "name": "BaseBdev2", 00:18:29.674 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:29.674 "is_configured": true, 00:18:29.674 "data_offset": 256, 00:18:29.674 "data_size": 7936 00:18:29.674 } 00:18:29.674 ] 00:18:29.674 }' 00:18:29.674 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.934 09:32:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.934 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.935 "name": "raid_bdev1", 00:18:29.935 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:29.935 "strip_size_kb": 0, 00:18:29.935 "state": "online", 00:18:29.935 "raid_level": "raid1", 00:18:29.935 "superblock": true, 00:18:29.935 "num_base_bdevs": 2, 00:18:29.935 "num_base_bdevs_discovered": 2, 00:18:29.935 "num_base_bdevs_operational": 2, 00:18:29.935 "base_bdevs_list": [ 00:18:29.935 { 00:18:29.935 "name": "spare", 00:18:29.935 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:29.935 "is_configured": true, 00:18:29.935 "data_offset": 256, 00:18:29.935 "data_size": 7936 00:18:29.935 }, 00:18:29.935 { 00:18:29.935 "name": "BaseBdev2", 00:18:29.935 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:29.935 "is_configured": true, 00:18:29.935 "data_offset": 256, 00:18:29.935 "data_size": 7936 00:18:29.935 } 00:18:29.935 ] 00:18:29.935 }' 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.935 09:32:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.935 "name": 
"raid_bdev1", 00:18:29.935 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:29.935 "strip_size_kb": 0, 00:18:29.935 "state": "online", 00:18:29.935 "raid_level": "raid1", 00:18:29.935 "superblock": true, 00:18:29.935 "num_base_bdevs": 2, 00:18:29.935 "num_base_bdevs_discovered": 2, 00:18:29.935 "num_base_bdevs_operational": 2, 00:18:29.935 "base_bdevs_list": [ 00:18:29.935 { 00:18:29.935 "name": "spare", 00:18:29.935 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:29.935 "is_configured": true, 00:18:29.935 "data_offset": 256, 00:18:29.935 "data_size": 7936 00:18:29.935 }, 00:18:29.935 { 00:18:29.935 "name": "BaseBdev2", 00:18:29.935 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:29.935 "is_configured": true, 00:18:29.935 "data_offset": 256, 00:18:29.935 "data_size": 7936 00:18:29.935 } 00:18:29.935 ] 00:18:29.935 }' 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.935 09:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 [2024-12-12 09:32:04.262011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.251 [2024-12-12 09:32:04.262137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.251 [2024-12-12 09:32:04.262298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.251 [2024-12-12 09:32:04.262410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.251 [2024-12-12 
09:32:04.262484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.251 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.511 09:32:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.511 [2024-12-12 09:32:04.329861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.511 [2024-12-12 09:32:04.330022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.511 [2024-12-12 09:32:04.330059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:30.511 [2024-12-12 09:32:04.330072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.511 [2024-12-12 09:32:04.332780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.511 [2024-12-12 09:32:04.332827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.511 [2024-12-12 09:32:04.332916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:30.511 [2024-12-12 09:32:04.333010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.511 [2024-12-12 09:32:04.333156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.511 spare 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.511 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.512 [2024-12-12 09:32:04.433085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:30.512 [2024-12-12 09:32:04.433308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:30.512 [2024-12-12 09:32:04.433527] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:30.512 [2024-12-12 09:32:04.433749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:30.512 [2024-12-12 09:32:04.433804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:30.512 [2024-12-12 09:32:04.434034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.512 09:32:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.512 "name": "raid_bdev1", 00:18:30.512 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:30.512 "strip_size_kb": 0, 00:18:30.512 "state": "online", 00:18:30.512 "raid_level": "raid1", 00:18:30.512 "superblock": true, 00:18:30.512 "num_base_bdevs": 2, 00:18:30.512 "num_base_bdevs_discovered": 2, 00:18:30.512 "num_base_bdevs_operational": 2, 00:18:30.512 "base_bdevs_list": [ 00:18:30.512 { 00:18:30.512 "name": "spare", 00:18:30.512 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:30.512 "is_configured": true, 00:18:30.512 "data_offset": 256, 00:18:30.512 "data_size": 7936 00:18:30.512 }, 00:18:30.512 { 00:18:30.512 "name": "BaseBdev2", 00:18:30.512 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:30.512 "is_configured": true, 00:18:30.512 "data_offset": 256, 00:18:30.512 "data_size": 7936 00:18:30.512 } 00:18:30.512 ] 00:18:30.512 }' 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.512 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.083 09:32:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.083 "name": "raid_bdev1", 00:18:31.083 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:31.083 "strip_size_kb": 0, 00:18:31.083 "state": "online", 00:18:31.083 "raid_level": "raid1", 00:18:31.083 "superblock": true, 00:18:31.083 "num_base_bdevs": 2, 00:18:31.083 "num_base_bdevs_discovered": 2, 00:18:31.083 "num_base_bdevs_operational": 2, 00:18:31.083 "base_bdevs_list": [ 00:18:31.083 { 00:18:31.083 "name": "spare", 00:18:31.083 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:31.083 "is_configured": true, 00:18:31.083 "data_offset": 256, 00:18:31.083 "data_size": 7936 00:18:31.083 }, 00:18:31.083 { 00:18:31.083 "name": "BaseBdev2", 00:18:31.083 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:31.083 "is_configured": true, 00:18:31.083 "data_offset": 256, 00:18:31.083 "data_size": 7936 00:18:31.083 } 00:18:31.083 ] 00:18:31.083 }' 00:18:31.083 09:32:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.083 09:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.083 [2024-12-12 09:32:05.053143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.083 09:32:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.083 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.342 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.342 "name": "raid_bdev1", 00:18:31.342 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:31.342 "strip_size_kb": 0, 00:18:31.342 "state": "online", 00:18:31.342 
"raid_level": "raid1", 00:18:31.342 "superblock": true, 00:18:31.342 "num_base_bdevs": 2, 00:18:31.342 "num_base_bdevs_discovered": 1, 00:18:31.342 "num_base_bdevs_operational": 1, 00:18:31.342 "base_bdevs_list": [ 00:18:31.342 { 00:18:31.342 "name": null, 00:18:31.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.342 "is_configured": false, 00:18:31.342 "data_offset": 0, 00:18:31.342 "data_size": 7936 00:18:31.342 }, 00:18:31.342 { 00:18:31.342 "name": "BaseBdev2", 00:18:31.342 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:31.342 "is_configured": true, 00:18:31.342 "data_offset": 256, 00:18:31.342 "data_size": 7936 00:18:31.342 } 00:18:31.342 ] 00:18:31.342 }' 00:18:31.342 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.342 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.601 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.601 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.601 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.601 [2024-12-12 09:32:05.504347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.601 [2024-12-12 09:32:05.504743] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.601 [2024-12-12 09:32:05.504824] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:31.601 [2024-12-12 09:32:05.504904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.601 [2024-12-12 09:32:05.526169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:31.601 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.601 09:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:31.601 [2024-12-12 09:32:05.528757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.540 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:32.801 "name": "raid_bdev1", 00:18:32.801 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:32.801 "strip_size_kb": 0, 00:18:32.801 "state": "online", 00:18:32.801 "raid_level": "raid1", 00:18:32.801 "superblock": true, 00:18:32.801 "num_base_bdevs": 2, 00:18:32.801 "num_base_bdevs_discovered": 2, 00:18:32.801 "num_base_bdevs_operational": 2, 00:18:32.801 "process": { 00:18:32.801 "type": "rebuild", 00:18:32.801 "target": "spare", 00:18:32.801 "progress": { 00:18:32.801 "blocks": 2560, 00:18:32.801 "percent": 32 00:18:32.801 } 00:18:32.801 }, 00:18:32.801 "base_bdevs_list": [ 00:18:32.801 { 00:18:32.801 "name": "spare", 00:18:32.801 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:32.801 "is_configured": true, 00:18:32.801 "data_offset": 256, 00:18:32.801 "data_size": 7936 00:18:32.801 }, 00:18:32.801 { 00:18:32.801 "name": "BaseBdev2", 00:18:32.801 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:32.801 "is_configured": true, 00:18:32.801 "data_offset": 256, 00:18:32.801 "data_size": 7936 00:18:32.801 } 00:18:32.801 ] 00:18:32.801 }' 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.801 [2024-12-12 09:32:06.691660] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.801 [2024-12-12 09:32:06.739448] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.801 [2024-12-12 09:32:06.739728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.801 [2024-12-12 09:32:06.739802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.801 [2024-12-12 09:32:06.739832] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.801 09:32:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.801 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.061 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.061 "name": "raid_bdev1", 00:18:33.061 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:33.061 "strip_size_kb": 0, 00:18:33.061 "state": "online", 00:18:33.061 "raid_level": "raid1", 00:18:33.061 "superblock": true, 00:18:33.061 "num_base_bdevs": 2, 00:18:33.061 "num_base_bdevs_discovered": 1, 00:18:33.061 "num_base_bdevs_operational": 1, 00:18:33.061 "base_bdevs_list": [ 00:18:33.061 { 00:18:33.061 "name": null, 00:18:33.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.061 "is_configured": false, 00:18:33.061 "data_offset": 0, 00:18:33.061 "data_size": 7936 00:18:33.061 }, 00:18:33.061 { 00:18:33.061 "name": "BaseBdev2", 00:18:33.061 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:33.061 "is_configured": true, 00:18:33.061 "data_offset": 256, 00:18:33.061 "data_size": 7936 00:18:33.061 } 00:18:33.061 ] 00:18:33.061 }' 00:18:33.061 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.061 09:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.321 09:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:33.321 09:32:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.321 09:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.321 [2024-12-12 09:32:07.286803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.321 [2024-12-12 09:32:07.287003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.321 [2024-12-12 09:32:07.287065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:33.321 [2024-12-12 09:32:07.287108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.321 [2024-12-12 09:32:07.287421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.321 [2024-12-12 09:32:07.287483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.321 [2024-12-12 09:32:07.287598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:33.321 [2024-12-12 09:32:07.287647] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.321 [2024-12-12 09:32:07.287694] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:33.321 [2024-12-12 09:32:07.287749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.321 [2024-12-12 09:32:07.308981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:33.321 spare 00:18:33.322 09:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.322 09:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:33.322 [2024-12-12 09:32:07.311665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:34.703 "name": "raid_bdev1", 00:18:34.703 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:34.703 "strip_size_kb": 0, 00:18:34.703 "state": "online", 00:18:34.703 "raid_level": "raid1", 00:18:34.703 "superblock": true, 00:18:34.703 "num_base_bdevs": 2, 00:18:34.703 "num_base_bdevs_discovered": 2, 00:18:34.703 "num_base_bdevs_operational": 2, 00:18:34.703 "process": { 00:18:34.703 "type": "rebuild", 00:18:34.703 "target": "spare", 00:18:34.703 "progress": { 00:18:34.703 "blocks": 2560, 00:18:34.703 "percent": 32 00:18:34.703 } 00:18:34.703 }, 00:18:34.703 "base_bdevs_list": [ 00:18:34.703 { 00:18:34.703 "name": "spare", 00:18:34.703 "uuid": "82c161f0-7ef3-5ade-902f-6bd631a1bf6d", 00:18:34.703 "is_configured": true, 00:18:34.703 "data_offset": 256, 00:18:34.703 "data_size": 7936 00:18:34.703 }, 00:18:34.703 { 00:18:34.703 "name": "BaseBdev2", 00:18:34.703 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:34.703 "is_configured": true, 00:18:34.703 "data_offset": 256, 00:18:34.703 "data_size": 7936 00:18:34.703 } 00:18:34.703 ] 00:18:34.703 }' 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.703 [2024-12-12 
09:32:08.446680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.703 [2024-12-12 09:32:08.522517] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:34.703 [2024-12-12 09:32:08.522737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.703 [2024-12-12 09:32:08.522784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.703 [2024-12-12 09:32:08.522810] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.703 09:32:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.703 "name": "raid_bdev1", 00:18:34.703 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:34.703 "strip_size_kb": 0, 00:18:34.703 "state": "online", 00:18:34.703 "raid_level": "raid1", 00:18:34.703 "superblock": true, 00:18:34.703 "num_base_bdevs": 2, 00:18:34.703 "num_base_bdevs_discovered": 1, 00:18:34.703 "num_base_bdevs_operational": 1, 00:18:34.703 "base_bdevs_list": [ 00:18:34.703 { 00:18:34.703 "name": null, 00:18:34.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.703 "is_configured": false, 00:18:34.703 "data_offset": 0, 00:18:34.703 "data_size": 7936 00:18:34.703 }, 00:18:34.703 { 00:18:34.703 "name": "BaseBdev2", 00:18:34.703 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:34.703 "is_configured": true, 00:18:34.703 "data_offset": 256, 00:18:34.703 "data_size": 7936 00:18:34.703 } 00:18:34.703 ] 00:18:34.703 }' 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.703 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.963 09:32:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.963 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.223 09:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.223 "name": "raid_bdev1", 00:18:35.223 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:35.223 "strip_size_kb": 0, 00:18:35.223 "state": "online", 00:18:35.223 "raid_level": "raid1", 00:18:35.223 "superblock": true, 00:18:35.223 "num_base_bdevs": 2, 00:18:35.223 "num_base_bdevs_discovered": 1, 00:18:35.223 "num_base_bdevs_operational": 1, 00:18:35.223 "base_bdevs_list": [ 00:18:35.223 { 00:18:35.223 "name": null, 00:18:35.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.223 "is_configured": false, 00:18:35.223 "data_offset": 0, 00:18:35.223 "data_size": 7936 00:18:35.223 }, 00:18:35.223 { 00:18:35.223 "name": "BaseBdev2", 00:18:35.223 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:35.223 "is_configured": true, 00:18:35.223 "data_offset": 256, 
00:18:35.223 "data_size": 7936 00:18:35.223 } 00:18:35.223 ] 00:18:35.223 }' 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.223 [2024-12-12 09:32:09.119636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.223 [2024-12-12 09:32:09.119794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.223 [2024-12-12 09:32:09.119847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:35.223 [2024-12-12 09:32:09.119889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.223 [2024-12-12 09:32:09.120179] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.223 [2024-12-12 09:32:09.120238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.223 [2024-12-12 09:32:09.120340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:35.223 [2024-12-12 09:32:09.120360] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.223 [2024-12-12 09:32:09.120372] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:35.223 [2024-12-12 09:32:09.120386] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:35.223 BaseBdev1 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.223 09:32:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.172 09:32:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.172 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.173 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.173 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.173 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.173 "name": "raid_bdev1", 00:18:36.173 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:36.173 "strip_size_kb": 0, 00:18:36.173 "state": "online", 00:18:36.173 "raid_level": "raid1", 00:18:36.173 "superblock": true, 00:18:36.173 "num_base_bdevs": 2, 00:18:36.173 "num_base_bdevs_discovered": 1, 00:18:36.173 "num_base_bdevs_operational": 1, 00:18:36.173 "base_bdevs_list": [ 00:18:36.173 { 00:18:36.173 "name": null, 00:18:36.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.173 "is_configured": false, 00:18:36.173 "data_offset": 0, 00:18:36.173 "data_size": 7936 00:18:36.173 }, 00:18:36.173 { 00:18:36.173 "name": "BaseBdev2", 00:18:36.173 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:36.173 "is_configured": true, 00:18:36.173 "data_offset": 256, 00:18:36.173 "data_size": 7936 00:18:36.173 } 00:18:36.173 ] 00:18:36.173 }' 00:18:36.173 09:32:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.173 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.758 "name": "raid_bdev1", 00:18:36.758 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:36.758 "strip_size_kb": 0, 00:18:36.758 "state": "online", 00:18:36.758 "raid_level": "raid1", 00:18:36.758 "superblock": true, 00:18:36.758 "num_base_bdevs": 2, 00:18:36.758 "num_base_bdevs_discovered": 1, 00:18:36.758 "num_base_bdevs_operational": 1, 00:18:36.758 "base_bdevs_list": [ 00:18:36.758 { 00:18:36.758 "name": 
null, 00:18:36.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.758 "is_configured": false, 00:18:36.758 "data_offset": 0, 00:18:36.758 "data_size": 7936 00:18:36.758 }, 00:18:36.758 { 00:18:36.758 "name": "BaseBdev2", 00:18:36.758 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:36.758 "is_configured": true, 00:18:36.758 "data_offset": 256, 00:18:36.758 "data_size": 7936 00:18:36.758 } 00:18:36.758 ] 00:18:36.758 }' 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.758 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.758 [2024-12-12 09:32:10.776984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.758 [2024-12-12 09:32:10.777283] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.758 [2024-12-12 09:32:10.777369] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:37.018 request: 00:18:37.018 { 00:18:37.018 "base_bdev": "BaseBdev1", 00:18:37.018 "raid_bdev": "raid_bdev1", 00:18:37.018 "method": "bdev_raid_add_base_bdev", 00:18:37.018 "req_id": 1 00:18:37.018 } 00:18:37.018 Got JSON-RPC error response 00:18:37.018 response: 00:18:37.018 { 00:18:37.018 "code": -22, 00:18:37.018 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:37.018 } 00:18:37.018 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:37.018 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:37.018 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.018 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.018 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.018 09:32:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:37.957 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:37.957 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.957 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.957 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.957 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.957 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.958 "name": "raid_bdev1", 00:18:37.958 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:37.958 "strip_size_kb": 0, 
00:18:37.958 "state": "online", 00:18:37.958 "raid_level": "raid1", 00:18:37.958 "superblock": true, 00:18:37.958 "num_base_bdevs": 2, 00:18:37.958 "num_base_bdevs_discovered": 1, 00:18:37.958 "num_base_bdevs_operational": 1, 00:18:37.958 "base_bdevs_list": [ 00:18:37.958 { 00:18:37.958 "name": null, 00:18:37.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.958 "is_configured": false, 00:18:37.958 "data_offset": 0, 00:18:37.958 "data_size": 7936 00:18:37.958 }, 00:18:37.958 { 00:18:37.958 "name": "BaseBdev2", 00:18:37.958 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:37.958 "is_configured": true, 00:18:37.958 "data_offset": 256, 00:18:37.958 "data_size": 7936 00:18:37.958 } 00:18:37.958 ] 00:18:37.958 }' 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.958 09:32:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.217 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.217 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.217 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.217 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.217 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.477 09:32:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.477 "name": "raid_bdev1", 00:18:38.477 "uuid": "dc7f6b52-cb58-438a-ac14-c86d8794af9c", 00:18:38.477 "strip_size_kb": 0, 00:18:38.477 "state": "online", 00:18:38.477 "raid_level": "raid1", 00:18:38.477 "superblock": true, 00:18:38.477 "num_base_bdevs": 2, 00:18:38.477 "num_base_bdevs_discovered": 1, 00:18:38.477 "num_base_bdevs_operational": 1, 00:18:38.477 "base_bdevs_list": [ 00:18:38.477 { 00:18:38.477 "name": null, 00:18:38.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.477 "is_configured": false, 00:18:38.477 "data_offset": 0, 00:18:38.477 "data_size": 7936 00:18:38.477 }, 00:18:38.477 { 00:18:38.477 "name": "BaseBdev2", 00:18:38.477 "uuid": "2424b5bb-9572-5605-8127-cd2b8b1fb7a6", 00:18:38.477 "is_configured": true, 00:18:38.477 "data_offset": 256, 00:18:38.477 "data_size": 7936 00:18:38.477 } 00:18:38.477 ] 00:18:38.477 }' 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.477 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 90230 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90230 ']' 00:18:38.478 09:32:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90230 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90230 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90230' 00:18:38.478 killing process with pid 90230 00:18:38.478 Received shutdown signal, test time was about 60.000000 seconds 00:18:38.478 00:18:38.478 Latency(us) 00:18:38.478 [2024-12-12T09:32:12.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.478 [2024-12-12T09:32:12.501Z] =================================================================================================================== 00:18:38.478 [2024-12-12T09:32:12.501Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90230 00:18:38.478 09:32:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90230 00:18:38.478 [2024-12-12 09:32:12.433395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.478 [2024-12-12 09:32:12.433567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.478 [2024-12-12 09:32:12.433637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:38.478 [2024-12-12 09:32:12.433651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:39.047 [2024-12-12 09:32:12.788405] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.429 09:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:40.429 00:18:40.429 real 0m18.117s 00:18:40.429 user 0m23.511s 00:18:40.429 sys 0m1.906s 00:18:40.429 ************************************ 00:18:40.429 END TEST raid_rebuild_test_sb_md_interleaved 00:18:40.429 ************************************ 00:18:40.429 09:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.429 09:32:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.429 09:32:14 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:40.429 09:32:14 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:40.429 09:32:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 90230 ']' 00:18:40.429 09:32:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 90230 00:18:40.429 09:32:14 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:40.429 00:18:40.429 real 12m9.371s 00:18:40.429 user 16m11.683s 00:18:40.429 sys 2m2.820s 00:18:40.429 09:32:14 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.429 09:32:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.429 ************************************ 00:18:40.429 END TEST bdev_raid 00:18:40.429 ************************************ 00:18:40.429 09:32:14 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:40.429 09:32:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.429 09:32:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.429 09:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:40.429 
************************************ 00:18:40.429 START TEST spdkcli_raid 00:18:40.429 ************************************ 00:18:40.429 09:32:14 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:40.429 * Looking for test storage... 00:18:40.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:40.429 09:32:14 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.429 09:32:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.429 09:32:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.690 09:32:14 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.690 --rc genhtml_branch_coverage=1 00:18:40.690 --rc genhtml_function_coverage=1 00:18:40.690 --rc genhtml_legend=1 00:18:40.690 --rc geninfo_all_blocks=1 00:18:40.690 --rc geninfo_unexecuted_blocks=1 00:18:40.690 00:18:40.690 ' 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.690 --rc genhtml_branch_coverage=1 00:18:40.690 --rc genhtml_function_coverage=1 00:18:40.690 --rc genhtml_legend=1 00:18:40.690 --rc geninfo_all_blocks=1 00:18:40.690 --rc geninfo_unexecuted_blocks=1 00:18:40.690 00:18:40.690 ' 00:18:40.690 
09:32:14 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.690 --rc genhtml_branch_coverage=1 00:18:40.690 --rc genhtml_function_coverage=1 00:18:40.690 --rc genhtml_legend=1 00:18:40.690 --rc geninfo_all_blocks=1 00:18:40.690 --rc geninfo_unexecuted_blocks=1 00:18:40.690 00:18:40.690 ' 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.690 --rc genhtml_branch_coverage=1 00:18:40.690 --rc genhtml_function_coverage=1 00:18:40.690 --rc genhtml_legend=1 00:18:40.690 --rc geninfo_all_blocks=1 00:18:40.690 --rc geninfo_unexecuted_blocks=1 00:18:40.690 00:18:40.690 ' 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:40.690 09:32:14 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90907 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:40.690 09:32:14 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90907 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90907 ']' 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.690 09:32:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 [2024-12-12 09:32:14.640183] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:40.690 [2024-12-12 09:32:14.640943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90907 ] 00:18:40.950 [2024-12-12 09:32:14.823706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.210 [2024-12-12 09:32:14.975153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.210 [2024-12-12 09:32:14.975198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.143 09:32:16 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.143 09:32:16 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:42.143 09:32:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:42.143 09:32:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.143 09:32:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.143 09:32:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:42.143 09:32:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.143 09:32:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.143 09:32:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:42.143 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:42.143 ' 00:18:44.051 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:44.051 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:44.051 09:32:17 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:44.051 09:32:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.051 09:32:17 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.051 09:32:17 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:44.051 09:32:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.051 09:32:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.051 09:32:17 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:44.051 ' 00:18:44.990 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:45.250 09:32:19 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:45.250 09:32:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.250 09:32:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.250 09:32:19 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:45.250 09:32:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.250 09:32:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.250 09:32:19 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:45.250 09:32:19 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:45.820 09:32:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:45.820 09:32:19 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:45.820 09:32:19 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:45.820 09:32:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.820 09:32:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.820 09:32:19 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:45.820 09:32:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.820 09:32:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.820 09:32:19 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:45.820 ' 00:18:46.760 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:46.760 09:32:20 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:46.760 09:32:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.760 09:32:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.019 09:32:20 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:47.019 09:32:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.019 09:32:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.019 09:32:20 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:47.019 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:47.019 ' 00:18:48.412 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:48.412 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:48.412 09:32:22 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:48.412 09:32:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.412 09:32:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.412 09:32:22 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90907 00:18:48.412 09:32:22 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90907 ']' 00:18:48.412 09:32:22 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90907 00:18:48.672 09:32:22 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90907 00:18:48.672 killing process with pid 90907 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90907' 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90907 00:18:48.672 09:32:22 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90907 00:18:51.966 09:32:25 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:51.966 09:32:25 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90907 ']' 00:18:51.967 09:32:25 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90907 00:18:51.967 09:32:25 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90907 ']' 00:18:51.967 09:32:25 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90907 00:18:51.967 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90907) - No such process 00:18:51.967 Process with pid 90907 is not found 00:18:51.967 09:32:25 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90907 is not found' 00:18:51.967 09:32:25 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:51.967 09:32:25 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:51.967 09:32:25 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:51.967 09:32:25 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:51.967 ************************************ 00:18:51.967 END TEST spdkcli_raid 
00:18:51.967 ************************************ 00:18:51.967 00:18:51.967 real 0m11.098s 00:18:51.967 user 0m22.622s 00:18:51.967 sys 0m1.368s 00:18:51.967 09:32:25 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.967 09:32:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.967 09:32:25 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:51.967 09:32:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.967 09:32:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.967 09:32:25 -- common/autotest_common.sh@10 -- # set +x 00:18:51.967 ************************************ 00:18:51.967 START TEST blockdev_raid5f 00:18:51.967 ************************************ 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:51.967 * Looking for test storage... 00:18:51.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.967 09:32:25 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:51.967 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.967 --rc genhtml_branch_coverage=1 00:18:51.967 --rc genhtml_function_coverage=1 00:18:51.967 --rc genhtml_legend=1 00:18:51.967 --rc geninfo_all_blocks=1 00:18:51.967 --rc geninfo_unexecuted_blocks=1 00:18:51.967 00:18:51.967 ' 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:51.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.967 --rc genhtml_branch_coverage=1 00:18:51.967 --rc genhtml_function_coverage=1 00:18:51.967 --rc genhtml_legend=1 00:18:51.967 --rc geninfo_all_blocks=1 00:18:51.967 --rc geninfo_unexecuted_blocks=1 00:18:51.967 00:18:51.967 ' 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:51.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.967 --rc genhtml_branch_coverage=1 00:18:51.967 --rc genhtml_function_coverage=1 00:18:51.967 --rc genhtml_legend=1 00:18:51.967 --rc geninfo_all_blocks=1 00:18:51.967 --rc geninfo_unexecuted_blocks=1 00:18:51.967 00:18:51.967 ' 00:18:51.967 09:32:25 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:51.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.967 --rc genhtml_branch_coverage=1 00:18:51.967 --rc genhtml_function_coverage=1 00:18:51.967 --rc genhtml_legend=1 00:18:51.967 --rc geninfo_all_blocks=1 00:18:51.967 --rc geninfo_unexecuted_blocks=1 00:18:51.967 00:18:51.967 ' 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:51.967 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=91198 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
91198 00:18:51.968 09:32:25 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:51.968 09:32:25 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 91198 ']' 00:18:51.968 09:32:25 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.968 09:32:25 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.968 09:32:25 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.968 09:32:25 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.968 09:32:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:51.968 [2024-12-12 09:32:25.802198] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:51.968 [2024-12-12 09:32:25.802433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91198 ] 00:18:51.968 [2024-12-12 09:32:25.985934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.228 [2024-12-12 09:32:26.131297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.166 09:32:27 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.166 09:32:27 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:53.166 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:53.166 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:53.166 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:53.166 09:32:27 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.166 09:32:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 Malloc0 00:18:53.425 Malloc1 00:18:53.425 Malloc2 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.425 09:32:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:53.683 09:32:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8e9f642a-7b22-4ba5-a899-a620b9a1220f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8e9f642a-7b22-4ba5-a899-a620b9a1220f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8e9f642a-7b22-4ba5-a899-a620b9a1220f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "40ef94e6-516a-44ee-bcce-52af131c7e6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"e2b25b33-01aa-4d2a-b0cd-403b77c56888",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fa52c32d-eb77-47b5-aa45-808459a1894a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:53.683 09:32:27 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 91198 00:18:53.683 09:32:27 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 91198 ']' 00:18:53.683 09:32:27 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 91198 00:18:53.683 09:32:27 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:53.683 09:32:27 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.684 09:32:27 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91198 00:18:53.684 09:32:27 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.684 09:32:27 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.684 09:32:27 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91198' 00:18:53.684 killing process with pid 91198 00:18:53.684 09:32:27 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 91198 00:18:53.684 09:32:27 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 91198 00:18:56.971 09:32:30 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:56.971 09:32:30 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:56.971 09:32:30 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:56.971 09:32:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.971 09:32:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:56.971 ************************************ 00:18:56.971 START TEST bdev_hello_world 00:18:56.971 ************************************ 00:18:56.971 09:32:30 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:56.971 [2024-12-12 09:32:30.850427] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:56.971 [2024-12-12 09:32:30.850720] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91271 ] 00:18:57.229 [2024-12-12 09:32:31.032021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.229 [2024-12-12 09:32:31.183481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.165 [2024-12-12 09:32:31.835765] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:58.165 [2024-12-12 09:32:31.835851] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:58.165 [2024-12-12 09:32:31.835878] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:58.165 [2024-12-12 09:32:31.836502] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:58.165 [2024-12-12 09:32:31.836702] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:58.165 [2024-12-12 09:32:31.836722] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:58.165 [2024-12-12 09:32:31.836796] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:58.165 00:18:58.165 [2024-12-12 09:32:31.836821] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:00.068 00:19:00.068 real 0m2.823s 00:19:00.068 user 0m2.297s 00:19:00.068 sys 0m0.401s 00:19:00.068 09:32:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.068 ************************************ 00:19:00.068 END TEST bdev_hello_world 00:19:00.068 ************************************ 00:19:00.068 09:32:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:00.068 09:32:33 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:00.068 09:32:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.068 09:32:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.068 09:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.068 ************************************ 00:19:00.068 START TEST bdev_bounds 00:19:00.068 ************************************ 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=91326 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:00.068 Process bdevio pid: 91326 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 91326' 00:19:00.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 91326 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 91326 ']' 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.068 09:32:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:00.068 [2024-12-12 09:32:33.742791] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:19:00.068 [2024-12-12 09:32:33.743023] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91326 ] 00:19:00.068 [2024-12-12 09:32:33.913629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.068 [2024-12-12 09:32:34.070700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.068 [2024-12-12 09:32:34.070893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.068 [2024-12-12 09:32:34.070940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.003 09:32:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.003 09:32:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:01.003 09:32:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:01.003 I/O targets: 00:19:01.003 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:01.003 00:19:01.003 00:19:01.003 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.003 http://cunit.sourceforge.net/ 00:19:01.003 00:19:01.003 00:19:01.003 Suite: bdevio tests on: raid5f 00:19:01.003 Test: blockdev write read block ...passed 00:19:01.003 Test: blockdev write zeroes read block ...passed 00:19:01.003 Test: blockdev write zeroes read no split ...passed 00:19:01.262 Test: blockdev write zeroes read split ...passed 00:19:01.262 Test: blockdev write zeroes read split partial ...passed 00:19:01.262 Test: blockdev reset ...passed 00:19:01.262 Test: blockdev write read 8 blocks ...passed 00:19:01.262 Test: blockdev write read size > 128k ...passed 00:19:01.262 Test: blockdev write read invalid size ...passed 00:19:01.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:01.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:01.262 Test: blockdev write read max offset ...passed 00:19:01.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:01.262 Test: blockdev writev readv 8 blocks ...passed 00:19:01.262 Test: blockdev writev readv 30 x 1block ...passed 00:19:01.262 Test: blockdev writev readv block ...passed 00:19:01.262 Test: blockdev writev readv size > 128k ...passed 00:19:01.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:01.262 Test: blockdev comparev and writev ...passed 00:19:01.262 Test: blockdev nvme passthru rw ...passed 00:19:01.262 Test: blockdev nvme passthru vendor specific ...passed 00:19:01.262 Test: blockdev nvme admin passthru ...passed 00:19:01.262 Test: blockdev copy ...passed 00:19:01.262 00:19:01.262 Run Summary: Type Total Ran Passed Failed Inactive 00:19:01.262 suites 1 1 n/a 0 0 00:19:01.262 tests 23 23 23 0 0 00:19:01.262 asserts 130 130 130 0 n/a 
00:19:01.262 00:19:01.262 Elapsed time = 0.707 seconds 00:19:01.262 0 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 91326 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 91326 ']' 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 91326 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91326 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91326' 00:19:01.262 killing process with pid 91326 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 91326 00:19:01.262 09:32:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 91326 00:19:03.202 09:32:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:03.202 00:19:03.202 real 0m3.336s 00:19:03.202 user 0m8.292s 00:19:03.202 sys 0m0.516s 00:19:03.202 09:32:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.202 ************************************ 00:19:03.202 END TEST bdev_bounds 00:19:03.202 ************************************ 00:19:03.202 09:32:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:03.202 09:32:37 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.202 
09:32:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:03.202 09:32:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.202 09:32:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.202 ************************************ 00:19:03.202 START TEST bdev_nbd 00:19:03.202 ************************************ 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- 
# local nbd_list 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91391 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91391 /var/tmp/spdk-nbd.sock 00:19:03.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 91391 ']' 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.202 09:32:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:03.202 [2024-12-12 09:32:37.148448] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:19:03.202 [2024-12-12 09:32:37.148699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.462 [2024-12-12 09:32:37.328040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.462 [2024-12-12 09:32:37.480227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.398 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.399 09:32:38 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.658 1+0 records in 00:19:04.658 1+0 records out 00:19:04.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003694 s, 11.1 MB/s 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.658 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.916 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:04.916 { 00:19:04.916 "nbd_device": "/dev/nbd0", 00:19:04.916 "bdev_name": "raid5f" 00:19:04.916 } 00:19:04.916 ]' 00:19:04.916 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:04.916 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:04.916 { 00:19:04.916 "nbd_device": "/dev/nbd0", 00:19:04.917 "bdev_name": "raid5f" 00:19:04.917 } 00:19:04.917 ]' 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.917 09:32:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.175 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:05.433 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.434 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:05.692 /dev/nbd0 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:05.692 09:32:39 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.692 1+0 records in 00:19:05.692 1+0 records out 00:19:05.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386525 s, 10.6 MB/s 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.692 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:05.951 { 00:19:05.951 "nbd_device": "/dev/nbd0", 00:19:05.951 "bdev_name": "raid5f" 00:19:05.951 } 00:19:05.951 ]' 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:05.951 { 00:19:05.951 "nbd_device": "/dev/nbd0", 00:19:05.951 "bdev_name": "raid5f" 00:19:05.951 } 00:19:05.951 ]' 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:05.951 256+0 records in 00:19:05.951 256+0 records out 00:19:05.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504553 s, 208 MB/s 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:05.951 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:06.210 256+0 records in 00:19:06.210 256+0 records out 00:19:06.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343032 s, 30.6 MB/s 00:19:06.210 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:06.210 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.210 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.210 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:06.210 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.210 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:06.211 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:06.211 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:06.211 09:32:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.211 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.471 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:06.731 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:06.991 malloc_lvol_verify 00:19:06.991 09:32:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:07.251 b501e33d-243b-44b1-a90e-cdc677ab4653 00:19:07.251 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:07.251 b30408ce-ffde-4514-a059-708d925f86a5 00:19:07.251 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:07.511 /dev/nbd0 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:07.511 mke2fs 1.47.0 (5-Feb-2023) 00:19:07.511 Discarding device blocks: 0/4096 done 00:19:07.511 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:07.511 00:19:07.511 Allocating group tables: 0/1 done 00:19:07.511 Writing inode tables: 0/1 done 00:19:07.511 Creating journal (1024 blocks): done 00:19:07.511 Writing superblocks and filesystem accounting information: 0/1 done 00:19:07.511 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.511 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91391 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 91391 ']' 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 91391 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91391 00:19:07.771 killing process with pid 91391 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91391' 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 91391 00:19:07.771 09:32:41 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 91391 00:19:09.679 09:32:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:09.679 00:19:09.679 real 0m6.478s 00:19:09.679 user 0m8.637s 00:19:09.679 sys 0m1.574s 00:19:09.679 09:32:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.679 09:32:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:09.679 ************************************ 00:19:09.679 END TEST bdev_nbd 00:19:09.679 ************************************ 00:19:09.679 09:32:43 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:09.679 09:32:43 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:09.679 09:32:43 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:09.679 09:32:43 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:09.679 09:32:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.679 09:32:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.679 09:32:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.679 ************************************ 00:19:09.679 START TEST bdev_fio 00:19:09.679 ************************************ 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:09.679 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:09.679 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:09.939 ************************************ 00:19:09.939 START TEST bdev_fio_rw_verify 00:19:09.939 ************************************ 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.939 09:32:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:10.199 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:10.199 fio-3.35 00:19:10.199 Starting 1 thread 00:19:22.405 00:19:22.405 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91606: Thu Dec 12 09:32:55 2024 00:19:22.405 read: IOPS=9369, BW=36.6MiB/s (38.4MB/s)(366MiB/10001msec) 00:19:22.405 slat (usec): min=19, max=176, avg=26.12, stdev= 3.82 00:19:22.405 clat (usec): min=12, max=528, avg=168.39, stdev=64.93 00:19:22.405 lat (usec): min=38, max=555, avg=194.50, stdev=66.04 00:19:22.405 clat percentiles (usec): 00:19:22.405 | 50.000th=[ 167], 99.000th=[ 310], 99.900th=[ 375], 99.990th=[ 429], 00:19:22.405 | 99.999th=[ 529] 00:19:22.405 write: IOPS=9794, BW=38.3MiB/s (40.1MB/s)(378MiB/9883msec); 0 zone resets 00:19:22.405 slat (usec): min=8, max=181, avg=21.51, stdev= 5.72 00:19:22.405 clat (usec): min=77, max=1075, avg=393.87, stdev=63.85 00:19:22.405 lat (usec): min=97, max=1245, avg=415.37, stdev=66.07 00:19:22.405 clat percentiles (usec): 00:19:22.405 | 50.000th=[ 392], 99.000th=[ 562], 99.900th=[ 685], 99.990th=[ 971], 00:19:22.405 | 99.999th=[ 1074] 00:19:22.405 bw ( KiB/s): min=32792, max=46392, per=99.30%, avg=38901.89, stdev=3746.19, samples=19 00:19:22.405 iops : min= 8198, max=11598, avg=9725.47, stdev=936.55, samples=19 00:19:22.405 lat (usec) : 20=0.01%, 50=0.01%, 100=9.42%, 
250=33.34%, 500=54.79% 00:19:22.405 lat (usec) : 750=2.42%, 1000=0.02% 00:19:22.405 lat (msec) : 2=0.01% 00:19:22.405 cpu : usr=98.74%, sys=0.53%, ctx=30, majf=0, minf=7980 00:19:22.405 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.405 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.405 issued rwts: total=93703,96796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.405 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:22.405 00:19:22.405 Run status group 0 (all jobs): 00:19:22.405 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=366MiB (384MB), run=10001-10001msec 00:19:22.405 WRITE: bw=38.3MiB/s (40.1MB/s), 38.3MiB/s-38.3MiB/s (40.1MB/s-40.1MB/s), io=378MiB (396MB), run=9883-9883msec 00:19:22.971 ----------------------------------------------------- 00:19:22.971 Suppressions used: 00:19:22.971 count bytes template 00:19:22.971 1 7 /usr/src/fio/parse.c 00:19:22.971 188 18048 /usr/src/fio/iolog.c 00:19:22.971 1 8 libtcmalloc_minimal.so 00:19:22.971 1 904 libcrypto.so 00:19:22.971 ----------------------------------------------------- 00:19:22.971 00:19:22.971 00:19:22.971 real 0m13.225s 00:19:22.971 user 0m13.197s 00:19:22.971 sys 0m0.696s 00:19:22.971 09:32:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.971 09:32:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:22.971 ************************************ 00:19:22.971 END TEST bdev_fio_rw_verify 00:19:22.971 ************************************ 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8e9f642a-7b22-4ba5-a899-a620b9a1220f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"8e9f642a-7b22-4ba5-a899-a620b9a1220f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8e9f642a-7b22-4ba5-a899-a620b9a1220f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "40ef94e6-516a-44ee-bcce-52af131c7e6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e2b25b33-01aa-4d2a-b0cd-403b77c56888",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fa52c32d-eb77-47b5-aa45-808459a1894a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.230 /home/vagrant/spdk_repo/spdk 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:23.230 00:19:23.230 real 0m13.502s 00:19:23.230 user 0m13.311s 00:19:23.230 sys 0m0.830s 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.230 ************************************ 00:19:23.230 END TEST bdev_fio 00:19:23.230 ************************************ 00:19:23.230 09:32:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:23.230 09:32:57 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.230 09:32:57 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:23.230 09:32:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:23.230 09:32:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.230 09:32:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.230 ************************************ 00:19:23.230 START TEST bdev_verify 00:19:23.230 ************************************ 00:19:23.230 09:32:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:23.230 [2024-12-12 09:32:57.251234] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:19:23.230 [2024-12-12 09:32:57.251384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91774 ] 00:19:23.490 [2024-12-12 09:32:57.430256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:23.792 [2024-12-12 09:32:57.579615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.792 [2024-12-12 09:32:57.579665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.361 Running I/O for 5 seconds... 00:19:26.238 8455.00 IOPS, 33.03 MiB/s [2024-12-12T09:33:01.656Z] 8469.00 IOPS, 33.08 MiB/s [2024-12-12T09:33:02.592Z] 8430.67 IOPS, 32.93 MiB/s [2024-12-12T09:33:03.544Z] 8328.75 IOPS, 32.53 MiB/s [2024-12-12T09:33:03.544Z] 8270.80 IOPS, 32.31 MiB/s 00:19:29.521 Latency(us) 00:19:29.521 [2024-12-12T09:33:03.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.521 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:29.521 Verification LBA range: start 0x0 length 0x2000 00:19:29.521 raid5f : 5.03 4628.40 18.08 0.00 0.00 41760.42 198.54 30678.86 00:19:29.521 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:29.521 Verification LBA range: start 0x2000 length 0x2000 00:19:29.522 raid5f : 5.02 3625.27 14.16 0.00 0.00 53152.16 2275.16 38920.94 00:19:29.522 [2024-12-12T09:33:03.545Z] =================================================================================================================== 00:19:29.522 [2024-12-12T09:33:03.545Z] Total : 8253.67 32.24 0.00 0.00 46759.29 198.54 38920.94 00:19:31.430 00:19:31.430 real 0m7.923s 00:19:31.430 user 0m14.470s 00:19:31.430 sys 0m0.393s 00:19:31.430 ************************************ 00:19:31.430 END TEST bdev_verify 00:19:31.430 ************************************ 
00:19:31.430 09:33:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.430 09:33:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:31.430 09:33:05 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:31.430 09:33:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:31.430 09:33:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.430 09:33:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.430 ************************************ 00:19:31.430 START TEST bdev_verify_big_io 00:19:31.430 ************************************ 00:19:31.430 09:33:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:31.430 [2024-12-12 09:33:05.237056] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:19:31.430 [2024-12-12 09:33:05.237353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91875 ] 00:19:31.430 [2024-12-12 09:33:05.422874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:31.689 [2024-12-12 09:33:05.593807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.689 [2024-12-12 09:33:05.593819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.629 Running I/O for 5 seconds... 
00:19:34.542 441.00 IOPS, 27.56 MiB/s [2024-12-12T09:33:09.942Z] 507.00 IOPS, 31.69 MiB/s [2024-12-12T09:33:10.877Z] 528.00 IOPS, 33.00 MiB/s [2024-12-12T09:33:11.813Z] 554.25 IOPS, 34.64 MiB/s [2024-12-12T09:33:11.813Z] 558.40 IOPS, 34.90 MiB/s 00:19:37.790 Latency(us) 00:19:37.790 [2024-12-12T09:33:11.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.790 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:37.790 Verification LBA range: start 0x0 length 0x200 00:19:37.790 raid5f : 5.43 280.48 17.53 0.00 0.00 11125798.98 253.99 527493.25 00:19:37.790 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:37.790 Verification LBA range: start 0x200 length 0x200 00:19:37.790 raid5f : 5.41 281.72 17.61 0.00 0.00 11002735.83 329.11 527493.25 00:19:37.790 [2024-12-12T09:33:11.813Z] =================================================================================================================== 00:19:37.790 [2024-12-12T09:33:11.813Z] Total : 562.20 35.14 0.00 0.00 11064227.00 253.99 527493.25 00:19:39.704 00:19:39.704 real 0m8.454s 00:19:39.704 user 0m15.488s 00:19:39.704 sys 0m0.419s 00:19:39.704 ************************************ 00:19:39.704 END TEST bdev_verify_big_io 00:19:39.704 ************************************ 00:19:39.704 09:33:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.704 09:33:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.704 09:33:13 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:39.704 09:33:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:39.704 09:33:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.704 09:33:13 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.704 ************************************ 00:19:39.704 START TEST bdev_write_zeroes 00:19:39.704 ************************************ 00:19:39.704 09:33:13 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:39.963 [2024-12-12 09:33:13.758495] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:19:39.963 [2024-12-12 09:33:13.758794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91979 ] 00:19:39.963 [2024-12-12 09:33:13.943217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.223 [2024-12-12 09:33:14.102716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.790 Running I/O for 1 seconds... 
00:19:42.164 18927.00 IOPS, 73.93 MiB/s 00:19:42.164 Latency(us) 00:19:42.164 [2024-12-12T09:33:16.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.164 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:42.164 raid5f : 1.01 18913.28 73.88 0.00 0.00 6739.98 2031.90 9444.05 00:19:42.164 [2024-12-12T09:33:16.187Z] =================================================================================================================== 00:19:42.164 [2024-12-12T09:33:16.187Z] Total : 18913.28 73.88 0.00 0.00 6739.98 2031.90 9444.05 00:19:44.066 00:19:44.066 real 0m3.997s 00:19:44.066 user 0m3.444s 00:19:44.066 sys 0m0.413s 00:19:44.066 09:33:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.066 09:33:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:44.066 ************************************ 00:19:44.066 END TEST bdev_write_zeroes 00:19:44.066 ************************************ 00:19:44.066 09:33:17 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:44.066 09:33:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:44.066 09:33:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.066 09:33:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.066 ************************************ 00:19:44.066 START TEST bdev_json_nonenclosed 00:19:44.066 ************************************ 00:19:44.066 09:33:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:44.066 [2024-12-12 
09:33:17.824311] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:19:44.066 [2024-12-12 09:33:17.824537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92042 ] 00:19:44.066 [2024-12-12 09:33:18.001028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.323 [2024-12-12 09:33:18.168487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.323 [2024-12-12 09:33:18.168619] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:44.323 [2024-12-12 09:33:18.168656] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:44.324 [2024-12-12 09:33:18.168670] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:44.581 00:19:44.581 real 0m0.772s 00:19:44.581 user 0m0.502s 00:19:44.581 sys 0m0.163s 00:19:44.581 09:33:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.581 09:33:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:44.581 ************************************ 00:19:44.581 END TEST bdev_json_nonenclosed 00:19:44.581 ************************************ 00:19:44.581 09:33:18 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:44.581 09:33:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:44.581 09:33:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.581 09:33:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.581 
************************************ 00:19:44.581 START TEST bdev_json_nonarray 00:19:44.581 ************************************ 00:19:44.581 09:33:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:44.839 [2024-12-12 09:33:18.666782] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:19:44.839 [2024-12-12 09:33:18.666951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92069 ] 00:19:44.839 [2024-12-12 09:33:18.846995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.097 [2024-12-12 09:33:18.998062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.097 [2024-12-12 09:33:18.998206] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:45.097 [2024-12-12 09:33:18.998227] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:45.097 [2024-12-12 09:33:18.998248] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:45.355 00:19:45.355 real 0m0.717s 00:19:45.355 user 0m0.454s 00:19:45.355 sys 0m0.157s 00:19:45.355 09:33:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.355 09:33:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:45.355 ************************************ 00:19:45.355 END TEST bdev_json_nonarray 00:19:45.355 ************************************ 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:45.355 09:33:19 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:45.355 00:19:45.355 real 0m53.909s 00:19:45.355 user 1m12.061s 00:19:45.355 sys 0m6.175s 00:19:45.355 09:33:19 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.355 09:33:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.355 
************************************ 00:19:45.355 END TEST blockdev_raid5f 00:19:45.355 ************************************ 00:19:45.614 09:33:19 -- spdk/autotest.sh@194 -- # uname -s 00:19:45.614 09:33:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:45.614 09:33:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.614 09:33:19 -- common/autotest_common.sh@10 -- # set +x 00:19:45.614 09:33:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:45.614 09:33:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:45.614 09:33:19 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:45.614 09:33:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:45.614 09:33:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.614 09:33:19 -- common/autotest_common.sh@10 -- # set +x 00:19:45.614 09:33:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:45.614 09:33:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:45.614 09:33:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:45.614 09:33:19 -- common/autotest_common.sh@10 -- # set +x 00:19:47.513 INFO: APP EXITING 00:19:47.513 INFO: killing all VMs 00:19:47.513 INFO: killing vhost app 00:19:47.513 INFO: EXIT DONE 00:19:48.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.080 Waiting for block devices as requested 00:19:48.080 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.339 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.276 Cleaning 00:19:49.276 Removing: /var/run/dpdk/spdk0/config 00:19:49.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:49.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:49.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:49.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:49.276 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:49.276 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:49.276 Removing: /dev/shm/spdk_tgt_trace.pid58057 00:19:49.276 Removing: /var/run/dpdk/spdk0 00:19:49.276 Removing: /var/run/dpdk/spdk_pid57816 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58057 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58290 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58401 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58446 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58585 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58603 
00:19:49.276 Removing: /var/run/dpdk/spdk_pid58813 00:19:49.276 Removing: /var/run/dpdk/spdk_pid58925 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59032 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59154 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59262 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59307 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59338 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59414 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59520 00:19:49.276 Removing: /var/run/dpdk/spdk_pid59967 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60037 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60111 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60127 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60277 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60304 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60457 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60479 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60548 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60572 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60636 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60665 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60860 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60902 00:19:49.276 Removing: /var/run/dpdk/spdk_pid60991 00:19:49.276 Removing: /var/run/dpdk/spdk_pid62354 00:19:49.276 Removing: /var/run/dpdk/spdk_pid62567 00:19:49.277 Removing: /var/run/dpdk/spdk_pid62707 00:19:49.277 Removing: /var/run/dpdk/spdk_pid63350 00:19:49.277 Removing: /var/run/dpdk/spdk_pid63562 00:19:49.277 Removing: /var/run/dpdk/spdk_pid63706 00:19:49.277 Removing: /var/run/dpdk/spdk_pid64351 00:19:49.277 Removing: /var/run/dpdk/spdk_pid64681 00:19:49.277 Removing: /var/run/dpdk/spdk_pid64821 00:19:49.277 Removing: /var/run/dpdk/spdk_pid66206 00:19:49.277 Removing: /var/run/dpdk/spdk_pid66459 00:19:49.277 Removing: /var/run/dpdk/spdk_pid66606 00:19:49.277 Removing: /var/run/dpdk/spdk_pid67993 00:19:49.277 Removing: /var/run/dpdk/spdk_pid68256 00:19:49.277 Removing: /var/run/dpdk/spdk_pid68398 
00:19:49.277 Removing: /var/run/dpdk/spdk_pid69783 00:19:49.277 Removing: /var/run/dpdk/spdk_pid70231 00:19:49.277 Removing: /var/run/dpdk/spdk_pid70377 00:19:49.277 Removing: /var/run/dpdk/spdk_pid71865 00:19:49.277 Removing: /var/run/dpdk/spdk_pid72124 00:19:49.277 Removing: /var/run/dpdk/spdk_pid72270 00:19:49.277 Removing: /var/run/dpdk/spdk_pid73766 00:19:49.537 Removing: /var/run/dpdk/spdk_pid74025 00:19:49.537 Removing: /var/run/dpdk/spdk_pid74176 00:19:49.537 Removing: /var/run/dpdk/spdk_pid75656 00:19:49.537 Removing: /var/run/dpdk/spdk_pid76143 00:19:49.537 Removing: /var/run/dpdk/spdk_pid76289 00:19:49.537 Removing: /var/run/dpdk/spdk_pid76441 00:19:49.537 Removing: /var/run/dpdk/spdk_pid76863 00:19:49.537 Removing: /var/run/dpdk/spdk_pid77594 00:19:49.537 Removing: /var/run/dpdk/spdk_pid77989 00:19:49.537 Removing: /var/run/dpdk/spdk_pid78683 00:19:49.537 Removing: /var/run/dpdk/spdk_pid79129 00:19:49.537 Removing: /var/run/dpdk/spdk_pid79884 00:19:49.537 Removing: /var/run/dpdk/spdk_pid80294 00:19:49.537 Removing: /var/run/dpdk/spdk_pid82273 00:19:49.537 Removing: /var/run/dpdk/spdk_pid82711 00:19:49.537 Removing: /var/run/dpdk/spdk_pid83157 00:19:49.537 Removing: /var/run/dpdk/spdk_pid85252 00:19:49.537 Removing: /var/run/dpdk/spdk_pid85739 00:19:49.537 Removing: /var/run/dpdk/spdk_pid86266 00:19:49.537 Removing: /var/run/dpdk/spdk_pid87336 00:19:49.537 Removing: /var/run/dpdk/spdk_pid87665 00:19:49.537 Removing: /var/run/dpdk/spdk_pid88619 00:19:49.537 Removing: /var/run/dpdk/spdk_pid88949 00:19:49.537 Removing: /var/run/dpdk/spdk_pid89901 00:19:49.537 Removing: /var/run/dpdk/spdk_pid90230 00:19:49.537 Removing: /var/run/dpdk/spdk_pid90907 00:19:49.537 Removing: /var/run/dpdk/spdk_pid91198 00:19:49.537 Removing: /var/run/dpdk/spdk_pid91271 00:19:49.537 Removing: /var/run/dpdk/spdk_pid91326 00:19:49.537 Removing: /var/run/dpdk/spdk_pid91591 00:19:49.537 Removing: /var/run/dpdk/spdk_pid91774 00:19:49.537 Removing: /var/run/dpdk/spdk_pid91875 
00:19:49.537 Removing: /var/run/dpdk/spdk_pid91979 00:19:49.537 Removing: /var/run/dpdk/spdk_pid92042 00:19:49.537 Removing: /var/run/dpdk/spdk_pid92069 00:19:49.537 Clean 00:19:49.537 09:33:23 -- common/autotest_common.sh@1453 -- # return 0 00:19:49.537 09:33:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:49.537 09:33:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.537 09:33:23 -- common/autotest_common.sh@10 -- # set +x 00:19:49.796 09:33:23 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:49.796 09:33:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.796 09:33:23 -- common/autotest_common.sh@10 -- # set +x 00:19:49.796 09:33:23 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:49.796 09:33:23 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:49.796 09:33:23 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:49.796 09:33:23 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:49.796 09:33:23 -- spdk/autotest.sh@398 -- # hostname 00:19:49.796 09:33:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:50.056 geninfo: WARNING: invalid characters removed from testname! 
00:20:16.612 09:33:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:17.181 09:33:51 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:19.716 09:33:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.251 09:33:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:24.793 09:33:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:27.365 09:34:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:29.901 09:34:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:29.901 09:34:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:29.901 09:34:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:29.901 09:34:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:29.901 09:34:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:29.901 09:34:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:29.902 + [[ -n 5430 ]] 00:20:29.902 + sudo kill 5430 00:20:29.911 [Pipeline] } 00:20:29.925 [Pipeline] // timeout 00:20:29.930 [Pipeline] } 00:20:29.942 [Pipeline] // stage 00:20:29.945 [Pipeline] } 00:20:29.956 [Pipeline] // catchError 00:20:29.965 [Pipeline] stage 00:20:29.968 [Pipeline] { (Stop VM) 00:20:29.981 [Pipeline] sh 00:20:30.265 + vagrant halt 00:20:32.807 ==> default: Halting domain... 00:20:40.939 [Pipeline] sh 00:20:41.221 + vagrant destroy -f 00:20:43.766 ==> default: Removing domain... 
00:20:44.039 [Pipeline] sh 00:20:44.324 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:20:44.333 [Pipeline] } 00:20:44.348 [Pipeline] // stage 00:20:44.354 [Pipeline] } 00:20:44.367 [Pipeline] // dir 00:20:44.373 [Pipeline] } 00:20:44.387 [Pipeline] // wrap 00:20:44.393 [Pipeline] } 00:20:44.406 [Pipeline] // catchError 00:20:44.414 [Pipeline] stage 00:20:44.416 [Pipeline] { (Epilogue) 00:20:44.429 [Pipeline] sh 00:20:44.719 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:51.307 [Pipeline] catchError 00:20:51.309 [Pipeline] { 00:20:51.317 [Pipeline] sh 00:20:51.594 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:51.594 Artifacts sizes are good 00:20:51.603 [Pipeline] } 00:20:51.617 [Pipeline] // catchError 00:20:51.627 [Pipeline] archiveArtifacts 00:20:51.633 Archiving artifacts 00:20:51.759 [Pipeline] cleanWs 00:20:51.775 [WS-CLEANUP] Deleting project workspace... 00:20:51.775 [WS-CLEANUP] Deferred wipeout is used... 00:20:51.781 [WS-CLEANUP] done 00:20:51.781 [Pipeline] } 00:20:51.791 [Pipeline] // stage 00:20:51.794 [Pipeline] } 00:20:51.803 [Pipeline] // node 00:20:51.806 [Pipeline] End of Pipeline 00:20:51.841 Finished: SUCCESS